Journal Articles
270 articles found
Chinese multi-document personal name disambiguation (Cited: 8)
1
Authors: Wang Houfeng (王厚峰), Mei Zheng. High Technology Letters, EI CAS, 2005, No. 3, pp. 280-283 (4 pages)
This paper presents a new approach to determining whether a personal name of interest refers to the same entity across documents. Firstly, three vectors are formed for each text: a personal-name Boolean vector denoting whether a personal name occurs in the text, a biographical-word Boolean vector representing title, occupation and so forth, and a feature vector with real values. Then, by combining a heuristic strategy based on the Boolean vectors with an agglomerative clustering algorithm based on the feature vectors, it seeks to resolve multi-document personal name coreference. Experimental results show that this approach achieves good performance when tested on the "Wang Gang" corpus.
Keywords: personal name disambiguation; Chinese multi-document; heuristic strategy; agglomerative clustering
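A minimal Python sketch of the two-step idea in the abstract above: a Boolean-vector heuristic pre-merges documents that share biographical cues, and agglomerative clustering over the real-valued feature vectors then groups documents by person. The distance threshold, the union-find pre-merge, and the group-mean smoothing are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def disambiguate(bio_bool_vectors, feature_vectors, distance_threshold=1.0):
    """Group documents whose target personal name likely refers to the same person."""
    bio = np.asarray(bio_bool_vectors, dtype=bool)
    feats = np.asarray(feature_vectors, dtype=float)
    n = len(feats)

    # Heuristic stage: documents sharing any biographical cue (title, occupation, ...)
    # are pre-merged with a small union-find.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if np.any(bio[i] & bio[j]):
                parent[find(i)] = find(j)
    group = [find(i) for i in range(n)]

    # Clustering stage: agglomerative clustering over feature vectors smoothed
    # toward their heuristic group mean; each cluster stands for one person.
    smoothed = np.stack([feats[[k for k in range(n) if group[k] == group[i]]].mean(axis=0)
                         for i in range(n)])
    return AgglomerativeClustering(n_clusters=None,
                                   distance_threshold=distance_threshold,
                                   linkage="average").fit_predict(smoothed)
```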
Using AdaBoost Meta-Learning Algorithm for Medical News Multi-Document Summarization (Cited: 1)
2
Author: Mahdi Gholami Mehr. Intelligent Information Management, 2013, No. 6, pp. 182-190 (9 pages)
Automatic text summarization involves reducing a text document or a larger corpus of multiple documents to a short set of sentences or paragraphs that convey the main meaning of the text. In this paper, we discuss multi-document summarization, which differs from single-document summarization in that the issues of compression, speed, redundancy and passage selection are critical in the formation of useful summaries. Since the number and variety of online medical news items make it difficult for experts in the medical field to read all of them, automatic multi-document summarization can be useful for easy study of information on the web. Hence we propose a new approach based on the AdaBoost meta-learning algorithm for summarization. We treat a document as a set of sentences, and the learning algorithm must learn to classify sentences as positive or negative examples based on their scores. For this learning task, we apply the AdaBoost meta-learning algorithm with a C4.5 decision tree as the base learner. In our experiments, we use 450 news items downloaded from different medical websites and compare our results with some existing approaches.
Keywords: multi-document summarization; machine learning; decision trees; AdaBoost; C4.5; medical document summarization
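A hedged sketch of the learning setup described above: AdaBoost over a decision-tree base learner that classifies sentences as summary-worthy or not. scikit-learn ships CART rather than C4.5, so the tree here is a stand-in, and the feature matrix and scoring interface are assumptions.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_sentence_classifier(X_train, y_train, n_rounds=50):
    """X_train: sentence feature matrix; y_train: 1 = include in summary, 0 = skip."""
    base_tree = DecisionTreeClassifier(max_depth=3)   # CART stand-in for C4.5
    model = AdaBoostClassifier(base_tree, n_estimators=n_rounds, learning_rate=1.0)
    return model.fit(X_train, y_train)

def summarize(model, X_sentences, sentences, max_sentences=5):
    """Rank sentences by the boosted classifier's confidence and keep the top few."""
    scores = model.decision_function(X_sentences)
    ranked = sorted(zip(scores, sentences), key=lambda p: p[0], reverse=True)
    return [s for _, s in ranked[:max_sentences]]
```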
Density peaks clustering based integrated framework for multi-document summarization (Cited: 2)
3
Authors: Baoyan Wang, Jian Zhang, Yi Liu, Yuexian Zou. CAAI Transactions on Intelligence Technology, 2017, No. 1, pp. 26-30 (5 pages)
We present a novel unsupervised integrated score framework to generate generic extractive multi-document summaries by ranking sentences based on a dynamic programming (DP) strategy. Considering that cluster-based methods proposed by other researchers tend to ignore the informativeness of words when generating summaries, our proposed framework comprehensively takes the relevance, diversity, informativeness and length constraint of sentences into consideration. We apply Density Peaks Clustering (DPC) to obtain relevance scores and diversity scores of sentences simultaneously. Our framework produces the best performance on DUC2004, with a ROUGE-1 score of 0.396, a ROUGE-2 score of 0.094 and a ROUGE-SU4 score of 0.143, outperforming a series of popular baselines such as DUC Best, FGB [7], and BSTM [10].
Keywords: multi-document summarization; integrated score framework; density peaks clustering; sentence ranking
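A small sketch of how Density Peaks Clustering can yield the two per-sentence scores mentioned above: local density as a relevance proxy and distance to the nearest denser sentence as a diversity proxy. The cosine-distance cutoff value is an assumption.

```python
import numpy as np

def density_peaks_scores(sentence_vectors, cutoff=0.4):
    """Return (density, delta) arrays for rows of a sentence embedding matrix."""
    X = np.asarray(sentence_vectors, dtype=float)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    dist = 1.0 - X @ X.T                       # cosine distance matrix

    # rho_i: how many sentences fall within the cutoff radius of sentence i.
    rho = (dist < cutoff).sum(axis=1) - 1      # exclude the sentence itself

    # delta_i: distance to the nearest sentence with strictly higher density;
    # the densest sentence gets the maximum distance by convention.
    n = len(X)
    delta = np.zeros(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        delta[i] = dist[i, higher].min() if len(higher) else dist[i].max()
    return rho, delta
```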
Constructing a taxonomy to support multi-document summarization of dissertation abstracts
4
Authors: KHOO Christopher S.G., GOH Dion H. Journal of Zhejiang University-Science A (Applied Physics & Engineering), SCIE EI CAS CSCD, 2005, No. 11, pp. 1258-1267 (10 pages)
This paper reports part of a study to develop a method for automatic multi-document summarization. The current focus is on dissertation abstracts in the field of sociology. The summarization method uses macro-level and micro-level discourse structure to identify important information that can be extracted from dissertation abstracts, and then uses a variable-based framework to integrate and organize the extracted information across dissertation abstracts. This framework focuses on research concepts and their relationships found in sociology dissertation abstracts and has a hierarchical structure. A taxonomy is constructed to support the summarization process in two ways: (1) helping to identify important concepts and relations expressed in the text, and (2) providing a structure for linking similar concepts in different abstracts. This paper describes the variable-based framework and the summarization process, and then reports the construction of the taxonomy for supporting the summarization process. An example is provided to show how the constructed taxonomy is used to identify important concepts and integrate the concepts extracted from different abstracts.
Keywords: text summarization; automatic multi-document summarization; variable-based framework; digital library
Unsupervised Graph-Based Tibetan Multi-Document Summarization
5
Authors: Xiaodong Yan, Yiqin Wang, Wei Song, Xiaobing Zhao, A.Run Yang Yanxing. Computers, Materials & Continua, SCIE EI, 2022, No. 10, pp. 1769-1781 (13 pages)
Text summarization creates a subset that represents the most important or relevant information in the original content, which effectively reduces information redundancy. Recently, neural network methods have achieved good results in text summarization for both Chinese and English, but research on text summarization in low-resource languages is still at an exploratory stage, especially for Tibetan, and there is no large-scale annotated corpus for the task. The lack of datasets severely limits the development of low-resource text summarization. In this setting, unsupervised learning approaches are more appealing for low-resource languages because they do not require labeled data. In this paper, we propose an unsupervised graph-based Tibetan multi-document summarization method, which divides a large number of Tibetan news documents into topics and extracts a summary for each topic. Summaries obtained with traditional graph-based methods have high redundancy, and their division of document topics is not detailed enough. For topic division, we adopt a two-level clustering method that converts the original documents into document-level and sentence-level graphs; we take both linguistic and deep representations into account and integrate an external corpus into the graph to obtain semantic sentence clusters, improving on the traditional K-Means clustering method and performing a more detailed clustering of documents. We then model the sentence clusters as graphs and re-score sentence nodes based on topic semantic information and the impact of topic features on sentences, so that summaries with higher topic relevance are extracted. To promote the development of Tibetan text summarization and to meet researchers' need for high-quality Tibetan summarization datasets, this paper manually constructs a Tibetan summarization dataset and carries out relevant experiments. The experimental results show that our method can effectively improve summary quality and is competitive with previous unsupervised methods.
Keywords: multi-document summarization; text clustering; topic feature fusion; graph model
Research on multi-document summarization based on latent semantic indexing
6
Authors: Qin Bing (秦兵), Liu Ting (刘挺), Zhang Yu (张宇), Li Sheng (李生). Journal of Harbin Institute of Technology (New Series), EI CAS, 2005, No. 1, pp. 91-94 (4 pages)
A multi-document summarization method based on Latent Semantic Indexing (LSI) is proposed. The method combines several reports on the same issue into a matrix of terms and sentences, uses Singular Value Decomposition (SVD) to reduce the dimension of the matrix and extract features, and then computes sentence similarity. The sentences are clustered according to their similarity, and centroid sentences are selected from each cluster. Finally, the selected sentences are ordered to generate the summary. The evaluation and results are presented, which show that the proposed method is effective.
Keywords: multi-document summarization; LSI (latent semantic indexing); clustering
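A compact sketch of the LSI pipeline the abstract outlines: build a sentence-term matrix (the transpose of the paper's term-sentence matrix), reduce it with SVD, cluster sentences in the latent space, and keep the sentence closest to each cluster centroid. The vectorizer settings, topic count and cluster count are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def lsi_summarize(sentences, n_topics=20, n_clusters=5):
    # Rows are sentences, columns are terms.
    sent_term = TfidfVectorizer().fit_transform(sentences)
    n_components = max(1, min(n_topics, sent_term.shape[1] - 1))
    latent = TruncatedSVD(n_components=n_components).fit_transform(sent_term)

    km = KMeans(n_clusters=n_clusters, n_init=10).fit(latent)
    chosen = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        centroid = km.cluster_centers_[c]
        # Pick the member sentence closest to its cluster centroid.
        best = members[np.argmin(np.linalg.norm(latent[members] - centroid, axis=1))]
        chosen.append(best)
    return [sentences[i] for i in sorted(chosen)]   # restore document order
```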
TWO-STAGE SENTENCE SELECTION APPROACH FOR MULTI-DOCUMENT SUMMARIZATION
7
Authors: Zhang Shu, Zhao Tiejun, Zheng Dequan, Zhao Hua. Journal of Electronics (China), 2008, No. 4, pp. 562-567 (6 pages)
Compared with the traditional method of adding sentences to build a summary in multi-document summarization, a two-stage sentence selection approach is proposed that generates the summary by deleting sentences from a candidate sentence set. It has two stages: the acquisition of a candidate sentence set and the optimal selection of sentences. In the first stage, the candidate sentence set is obtained by a redundancy-based sentence selection approach. In the second stage, sentences in the candidate set are deleted according to their contribution to the whole set until the appointed summary length is reached. On a test corpus, the ROUGE scores of the summaries obtained by the proposed approach prove its validity compared with the traditional method of sentence selection. The influence of the token chosen in the two-stage sentence selection approach on the quality of the generated summaries is also analyzed.
Keywords: two-stage; sentence selection approach; multi-document summarization
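A hedged sketch of the two-stage idea: a redundancy filter first builds the candidate set, then the candidate contributing least to the whole set is deleted repeatedly until the length budget is met. The cosine similarity measure, the redundancy threshold, and the word budget are assumptions.

```python
import numpy as np

def two_stage_select(sentence_vectors, sentences, redundancy_threshold=0.7, max_words=100):
    X = np.asarray(sentence_vectors, dtype=float)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = X @ X.T

    # Stage 1: greedy redundancy filter -- drop a sentence if it is too similar
    # to one already accepted into the candidate set.
    candidates = []
    for i in range(len(sentences)):
        if all(sim[i, j] < redundancy_threshold for j in candidates):
            candidates.append(i)

    # Stage 2: delete the sentence whose removal costs the least total similarity
    # to the rest of the candidate set, until the word budget is satisfied.
    def word_count(ids):
        return sum(len(sentences[i].split()) for i in ids)

    while len(candidates) > 1 and word_count(candidates) > max_words:
        contribution = {i: sum(sim[i, j] for j in candidates if j != i) for i in candidates}
        candidates.remove(min(contribution, key=contribution.get))
    return [sentences[i] for i in candidates]
```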
Multi-Document Summarization Model Based on Integer Linear Programming
8
Authors: Rasim Alguliev, Ramiz Aliguliyev, Makrufa Hajirahimova. Intelligent Control and Automation, 2010, No. 2, pp. 105-111 (7 pages)
This paper proposes an extractive generic text summarization model that generates summaries by selecting sentences according to their scores. Sentence scores are calculated from their coverage of the main content of the text, and summaries are created by extracting the highest-scored sentences from the original document. The model is formalized as a multiobjective integer programming problem. An advantage of this model is that it can cover the main content of the source(s) while introducing less redundancy into the generated summaries. To extract sentences that form a summary with extensive coverage of the main content and low redundancy, the similarity of sentences to the original document and the similarity between sentences are used. Performance evaluation is conducted by comparing summarization outputs with manual summaries on the DUC2004 dataset. Experiments showed that the proposed approach outperforms the related methods.
Keywords: multi-document summarization; content coverage; less redundancy; integer linear programming
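An illustrative integer program in the spirit of the abstract, written with the PuLP library: maximize the similarity of selected sentences to the whole document while penalizing pairwise redundancy, subject to a length budget. The linearization of the pairwise term and the weight values are assumptions, not the paper's exact formulation.

```python
import pulp

def ilp_summary(sim_to_doc, sim_pairwise, lengths, budget=100, redundancy_weight=0.5):
    n = len(sim_to_doc)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n)]
    y = {(i, j): pulp.LpVariable(f"y_{i}_{j}", cat="Binary")
         for i in range(n) for j in range(i + 1, n)}

    prob = pulp.LpProblem("summary", pulp.LpMaximize)
    # Objective: coverage of the document minus redundancy among selected pairs.
    prob += (pulp.lpSum(sim_to_doc[i] * x[i] for i in range(n))
             - redundancy_weight * pulp.lpSum(sim_pairwise[i][j] * y[i, j]
                                              for (i, j) in y))
    # Length budget.
    prob += pulp.lpSum(lengths[i] * x[i] for i in range(n)) <= budget
    # y_ij = 1 exactly when both sentences are selected (linearized AND).
    for (i, j) in y:
        prob += y[i, j] >= x[i] + x[j] - 1
        prob += y[i, j] <= x[i]
        prob += y[i, j] <= x[j]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if pulp.value(x[i]) > 0.5]
```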
Automatic Multi-Document Summarization Based on Keyword Density and Sentence-Word Graphs
9
Authors: YE Feiyue, XU Xinchen. Journal of Shanghai Jiaotong University (Science), EI, 2018, No. 4, pp. 584-592 (9 pages)
As a fundamental and effective tool for document understanding and organization, multi-document summarization enables better information services by creating concise and informative reports for large collections of documents. In this paper, we propose a sentence-word two-layer graph algorithm combined with keyword density to generate multi-document summaries, known as Graph & Keywordρ. Traditional graph methods for multi-document summarization only consider the influence of sentences and words across all documents rather than within individual documents. Therefore, we construct a word graph for each document and extract the right keywords in each document to modify the sentence graph and improve the significance and richness of the summary. Meanwhile, because the importance of words differs across documents, we propose using keyword density so that summaries provide rich content with a small number of words. The experimental results show that the Graph & Keywordρ method outperforms state-of-the-art systems when tested on the DUC2004 dataset.
Keywords: multi-document; graph algorithm; keyword density; Graph & Keywordρ; DUC2004
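A small sketch of the keyword-density idea: score each sentence by the share of document keywords it contains relative to its length, so summaries pack more key content into fewer words. The term-frequency keyword extraction here is a simple stand-in for the paper's per-document word-graph procedure.

```python
from collections import Counter

def keyword_density_scores(doc_tokens, sentence_tokens, top_k=10):
    """doc_tokens: all tokens of the document; sentence_tokens: list of token lists."""
    keywords = {w for w, _ in Counter(doc_tokens).most_common(top_k)}
    scores = []
    for sent in sentence_tokens:
        hits = sum(1 for w in sent if w in keywords)
        scores.append(hits / max(len(sent), 1))   # density = keyword hits per token
    return scores
```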
BHLM: Bayesian theory-based hybrid learning model for multi-document summarization
10
Authors: S. Suneetha, A. Venugopal Reddy. International Journal of Modeling, Simulation, and Scientific Computing, EI, 2018, No. 2, pp. 229-250 (22 pages)
In order to understand and organize documents in an efficient way, multi-document summarization has become a prominent technique in the Internet world. As the amount of available information is large, it is necessary to summarize documents to obtain condensed information. To perform multi-document summarization, a new Bayesian theory-based Hybrid Learning Model (BHLM) is proposed in this paper. Initially, the input documents are preprocessed, where stop words are removed. Then, sentence features are extracted to determine sentence scores for summarizing the documents. The extracted features are fed into the hybrid learning model for learning. Subsequently, the learned features, training error and correlation coefficient are integrated with the Bayesian model to develop BHLM. The proposed method assigns class labels with the aid of mean, variance and probability measures. Finally, based on the class labels, sentences are sorted to generate the final summary of the multi-document set. The experimental results are validated in MATLAB, and the performance is analyzed using precision, recall, F-measure and ROUGE-1. The proposed model attains 99.6% precision and a 75% ROUGE-1 measure, which shows that the model can provide the final summary efficiently.
Keywords: multi-document; text feature; sentence score; hybrid learning model; Bayesian theory
Document-Level Event Extraction Based on Multi-Granularity Reader and Graph Attention Network
11
Authors: Xue Songdong (薛颂东), Li Yonghao (李永豪), Zhao Hongyan (赵红燕). Application Research of Computers (计算机应用研究), CSCD, Peking University Core, 2024, No. 8, pp. 2329-2335 (7 pages)
Document-level event extraction faces two major challenges: argument scattering and multiple events. Most existing work extracts candidate arguments sentence by sentence, making it difficult to model cross-sentence context. To address this, a document-level event extraction model based on a multi-granularity reader and graph attention network is proposed. The multi-granularity reader performs multi-level semantic encoding, the graph attention network captures local and global relations between entity pairs, and a pruned complete graph based on entity-pair similarity is constructed as a pseudo-trigger, so that events and arguments in the document are captured comprehensively. Experiments on the public datasets ChFinAnn and DuEE-Fin show that the proposed method alleviates the argument-scattering problem and improves event extraction performance.
Keywords: multi-granularity reader; graph attention network; document-level event extraction
Research on Multi-Granularity Knowledge Organization of Standards Documents from a Knowledge Association Perspective
12
Authors: Fan Hao (范昊), Wang Yifan (王一帆). Journal of Information Resources Management (信息资源管理学报), CSSCI, 2024, No. 4, pp. 133-145 (13 pages)
Traditional document organization cannot keep pace with the digitalization of standards. It is necessary to fully exploit the multi-granularity knowledge units in standards documents and their semantic associations, and to explore new organization methods that make efficient use of standards knowledge, providing a reference for optimizing the supply of standards. From the perspective of knowledge association, this paper proposes a general multi-granularity, semantically rich knowledge organization method for standards documents. First, based on knowledge granularity theory, knowledge is partitioned and described at multiple granularities according to the knowledge content and demand characteristics of standards documents. Second, semantic association patterns and types among multi-granularity knowledge are identified from the aspects of knowledge hierarchy, document features, textual logic and spatio-temporal evolution. Finally, ontology construction is used to realize multi-granularity knowledge organization of standards documents, and knowledge instances are added for ontology validation and value demonstration. The proposed method can fully reveal the multi-granularity knowledge units in standards documents and form widely connected knowledge hierarchies and associations, helping standards knowledge to be effectively acquired, shared and reused in various service scenarios; it advances the construction of standards resources for the digital-intelligence era and enriches multi-granularity knowledge-driven document content mining and utilization.
Keywords: standards documents; knowledge organization; semantic association; multi-granularity knowledge; ontology construction
Multi-Document Summarization Based on Siamese-Network Text Semantic Matching
13
Authors: Zhong Qi (钟琪), Wang Zhongqing (王中卿), Wang Hongling (王红玲). Journal of Chinese Information Processing (中文信息学报), CSCD, Peking University Core, 2024, No. 5, pp. 107-116 (10 pages)
Multi-document summarization aims to extract, from a set of topically related documents, the sentences that best represent the central content of the document set. Text semantic matching learns the semantic relationship between two text units so that sentence representations carry richer semantic information. This paper proposes an extractive multi-document summarization method based on Siamese-network text semantic matching, which combines a Siamese network with the pre-trained language model BERT to build a joint learning model for text semantic matching and text summarization. The model uses the Siamese network to examine the semantic association between any two text units from different perspectives, learns the fragmented information in the document set, and further evaluates important information; finally, the summarization component selects the sentences that best represent the main content of the document set to form the summary. Experimental results show that, compared with current mainstream extractive multi-document summarization methods, the proposed method achieves a substantial improvement in ROUGE scores.
Keywords: extractive multi-document summarization; semantic relation; pre-trained language model
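A hedged sketch of the Siamese-BERT matching step: a shared BERT encoder embeds every sentence, pairwise semantic similarity is measured, and each sentence is scored by how strongly it matches the rest of the document set. The model name, the mean pooling, and the centrality-style scoring are illustrative assumptions rather than the paper's joint-learning objective.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")   # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-chinese")

def encode(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state              # (n, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)                  # mean pooling

def rank_sentences(sentences, top_k=5):
    emb = torch.nn.functional.normalize(encode(sentences), dim=1)
    sim = emb @ emb.T                        # Siamese-style pairwise similarity
    centrality = sim.sum(dim=1)              # how much each sentence matches the rest
    order = torch.argsort(centrality, descending=True)[:top_k]
    return [sentences[i] for i in sorted(order.tolist())]
```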
Research on Multi-Document Summarization Combined with Pre-training
14
Authors: Ding Yi (丁一), Wang Zhongqing (王中卿). Computer Science (计算机科学), CSCD, Peking University Core, 2024, No. S01, pp. 174-181 (8 pages)
The news summarization task aims to quickly and accurately distill concise summaries from large and complex news texts. This work studies multi-document summarization based on pre-trained language models, focusing on how specific training schemes that incorporate pre-training tasks improve model performance and strengthen information exchange across documents, so as to generate more comprehensive and concise summaries. For the combination with pre-training tasks, comparative experiments are designed on the baseline model and on the content, number and order of pre-training tasks, identifying effective pre-training tasks, summarizing concrete ways to strengthen cross-document information exchange, and distilling a concise and efficient pre-training pipeline. Training and testing on a public multi-document news dataset show that the content, number and order of pre-training tasks each improve ROUGE scores, and that the specific pre-training combination integrating all three findings improves ROUGE scores markedly.
Keywords: news; summarization; pre-training; multi-document; information exchange
Two-Stage Document Filtering and Asynchronous Multi-Granularity Graph for Multi-Hop Question Answering
15
Authors: Zhang Xuesong (张雪松), Li Guanjun (李冠君), Nie Shijia (聂士佳), Zhang Dawei (张大伟), Lü Zhao (吕钊), Tao Jianhua (陶建华). Computer Technology and Development (计算机技术与发展), 2024, No. 1, pp. 121-127 (7 pages)
Multi-hop question answering aims to predict the answer to a question, along with the supporting facts for that answer, by reasoning over the content of multiple documents. However, current multi-hop QA methods treat document filtering as finding all documents related to the question, without considering whether each of those documents actually helps find the answer. This paper therefore proposes a two-stage document filtering method. In the first stage, documents are scored with a low threshold to retrieve as many question-related documents as possible, ensuring high recall; in the second stage, the reasoning path to the answer is modeled and documents are extracted again on top of the first stage, ensuring high precision. In addition, for the multi-granularity graph built from the documents, a novel asynchronous update mechanism is proposed for answer prediction and supporting-fact prediction: it divides the multi-granularity graph into a heterogeneous graph and a homogeneous graph that are updated asynchronously for better multi-hop reasoning. The method outperforms current mainstream multi-hop QA methods, verifying its effectiveness.
Keywords: multi-hop question answering; document filtering; multi-granularity graph; asynchronous update; answer prediction
Fine-Grained Multi-Document Summary Extraction Based on Hierarchical Learning over Heterogeneous Graphs
16
Authors: Weng Yuyuan (翁裕源), Xu Boyan (许柏炎), Cai Ruichu (蔡瑞初). Computer Engineering (计算机工程), CAS CSCD, Peking University Core, 2024, No. 3, pp. 336-344 (9 pages)
The goal of multi-document summary extraction is to extract key information shared across multiple documents, with stricter conciseness requirements than single-document extraction. Existing multi-document extraction methods usually model at the sentence level, which easily introduces redundant information. To address this, a multi-document summary extraction framework based on hierarchical learning over heterogeneous graphs is proposed, which builds a word-level graph and a clause-level graph hierarchically to model semantic and structural relations effectively. To learn these two heterogeneous graphs, two learning layers with different update mechanisms are designed to reduce the difficulty of learning multiple structural relations. In the word-level graph learning layer, an alternating update mechanism updates nodes of different granularities, with word nodes as carriers passing semantic information through a graph attention network. In the clause-level graph learning layer, a two-stage stepwise update mechanism aggregates multiple structural relations: the first stage aggregates homogeneous relations, and the second stage aggregates heterogeneous relations via attention. Experimental results show that, compared with extractive baseline models, the framework achieves significant gains on the Multinews dataset, improving ROUGE-1, ROUGE-2 and ROUGE-L by 0.88, 0.23 and 2.27 respectively; ablation results also verify the effectiveness of the two learning layers and their hierarchical update mechanisms.
Keywords: extractive multi-document summarization; fine-grained modeling; heterogeneous graph; hierarchical learning; semantic relation; structural relation
A Judgment Document Summarization Method Based on Trial Logic Steps (Cited: 1)
17
Authors: Yu Shuai (余帅), Song Yumei (宋玉梅), Qin Yongbin (秦永彬), Huang Ruizhang (黄瑞章), Chen Yanping (陈艳平). Computer Engineering and Applications (计算机工程与应用), CSCD, Peking University Core, 2024, No. 4, pp. 113-121 (9 pages)
Judicial summarization of judgment documents is a key technology for improving the ability to analyze judgment documents. As the carrier of trial activities, judgment documents precisely present the trial logic of a case, but current summarization methods for judgment documents focus only on sequential information, ignore the documents' logical structure, and cannot effectively handle overly long text and redundant information. This paper proposes a judgment document summarization method based on trial logic steps, combining extraction and generation. In the extraction stage, a multi-label classification method extracts four sentence sets, "type, claim, fact, result", following the logical steps by which courts hear cases; in the generation stage, a fine-tuned T5-PEGASUS model produces the summary. A maximum-similarity matching algorithm based on internal knowledge denoises the input text of the "fact" part, further improving summary quality. Experimental results show that, compared with the mainstream pointer-generator network, the method improves the F1 scores of ROUGE-1, ROUGE-2 and ROUGE-L by 17.99, 21.24 and 21.86 percentage points respectively, indicating that introducing logical structure into judicial summarization improves performance.
Keywords: judgment documents; trial logic steps; multi-label classification; internal knowledge; abstractive summarization
Multi-Label Document Classification Based on Heterogeneous Graph Neural Network Pre-training
18
Authors: Wu Jiawei (吴家伟), Fang Quan (方全), Hu Jun (胡骏), Qian Shengsheng (钱胜胜). Computer Science (计算机科学), CSCD, Peking University Core, 2024, No. 1, pp. 143-149 (7 pages)
Multi-label document classification, a technique that associates document instances with relevant labels, has attracted increasing attention from researchers in recent years. Existing methods try to fuse information beyond the text, such as document metadata or label structure. However, they either exploit only the surface semantics of the metadata or fail to consider the long-tail distribution of labels, thereby ignoring information such as the high-order relationships between documents and their metadata and the distribution of labels, which hurts classification accuracy. This paper therefore proposes a new multi-label document classification method based on heterogeneous graph neural network pre-training. It constructs a heterogeneous graph of documents and their metadata, uses two contrastive-learning pre-training methods to capture the relationships between documents and their metadata, and improves classification accuracy with a loss function that balances the long-tail label distribution. Experimental results on benchmark datasets show that the proposed method improves accuracy by 8% over Transformer, 4.75% over BertXML and 1.3% over MATCH.
Keywords: multi-label document classification; metadata; heterogeneous graph neural network; pre-training; long-tail distribution
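One common way to balance a long-tailed multi-label objective, offered here only as a hedged illustration rather than the paper's exact loss: weight each label's binary cross-entropy by the inverse "effective number" of its positive examples, so rare labels are not drowned out by frequent ones.

```python
import torch

def balanced_bce_loss(logits, targets, label_counts, beta=0.999):
    """logits/targets: (batch, n_labels); label_counts: positives per label (assumed known)."""
    counts = torch.as_tensor(label_counts, dtype=torch.float)
    effective = (1.0 - torch.pow(beta, counts)) / (1.0 - beta)   # effective number of samples
    weights = 1.0 / effective
    weights = weights / weights.sum() * len(weights)             # keep weights around 1 on average
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets.float(), reduction="none")
    return (loss * weights).mean()                               # rare labels get larger weight
```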
A Public Opinion Summarization Method Using Heterogeneous Graphs with Topic Nodes
19
Authors: Bao Ritong (宝日彤), Zeng Miaorui (曾淼瑞), Sun Haichun (孙海春). Science Technology and Engineering (科学技术与工程), Peking University Core, 2024, No. 23, pp. 9965-9972 (8 pages)
Social platforms such as Weibo carry netizens' differing views on public opinion events, and identifying valuable information among massive topical comments has become an important problem. This paper proposes a public opinion summarization method based on heterogeneous graphs that effectively extracts the mainstream viewpoints of hot events, helping to guide and defuse Internet public opinion crises. To address the difficulty of capturing cross-document semantic relations in multi-document summarization, topic nodes are introduced into the comment-sentence graph to mine latent semantic associations among the input documents. Specifically, comment topics are extracted and a heterogeneous graph model containing topic nodes is constructed; a graph attention mechanism enables semantic interaction between nodes of different granularities; finally, candidate summary sentences are extracted with the Maximal Marginal Relevance algorithm. Experimental results show that the improved model raises ROUGE-1, ROUGE-2 and ROUGE-L scores by 0.46%, 0.46% and 0.48% on the public English Multi-News dataset; compared with existing popular models such as TextRank and Sumpip, the model performs best on a self-built Weibo comment dataset.
Keywords: multi-document summarization; public opinion summarization; topic nodes; graph attention mechanism; Weibo comment summarization
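A minimal sketch of the Maximal Marginal Relevance step mentioned above: the next summary sentence is the one that best trades off relevance to the topic against similarity to sentences already selected. The lambda trade-off value and the use of embedding cosine similarity are assumptions.

```python
import numpy as np

def mmr_select(sentence_vecs, topic_vec, k=5, lam=0.7):
    S = np.asarray(sentence_vecs, dtype=float)
    S = S / (np.linalg.norm(S, axis=1, keepdims=True) + 1e-12)
    t = topic_vec / (np.linalg.norm(topic_vec) + 1e-12)

    relevance = S @ t
    selected, remaining = [], list(range(len(S)))
    while remaining and len(selected) < k:
        def mmr(i):
            # Redundancy = similarity to the most similar already-selected sentence.
            redundancy = max((S[i] @ S[j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected
```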
Document-Level Relation Extraction Based on Multi-Relation-View Axial Attention
20
Authors: Wu Hao (吴皓), Zhou Gang (周刚), Lu Jicang (卢记仓), Liu Hongbo (刘洪波), Chen Jing (陈静). Computer Science (计算机科学), CSCD, Peking University Core, 2024, No. 10, pp. 337-343 (7 pages)
Document-level relation extraction aims to extract relations between multiple entities from a document. To address the limited multi-hop reasoning ability of existing work under different relation types, a document-level relation extraction model based on axial attention over multi-relation views is proposed. The model constructs multi-view adjacency matrices according to the relation types between entities and performs multi-hop reasoning over them. Experiments on two document-level relation extraction benchmarks, GDA and DocRED, show that the model reaches an F1 of 85.7% on the biomedical GDA dataset, clearly outperforming baseline models, and also captures multi-hop relations between entities effectively on DocRED.
Keywords: relation extraction; document-level; axial attention; multi-view; multi-hop reasoning
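A tiny sketch of the multi-relation, multi-hop part only (the axial-attention component is not modeled here): keep one adjacency matrix per relation view between entities and compose them by matrix product, so that multi-hop paths become reachable. The relation views and hop count are illustrative assumptions.

```python
import numpy as np

def multi_hop_reachability(views, hops=2):
    """views: dict of relation-view name -> (n x n) 0/1 adjacency matrix over entities."""
    combined = np.clip(sum(views.values()), 0, 1)      # union of all relation views
    reach = np.eye(len(combined), dtype=int)
    for _ in range(hops):
        reach = np.clip(reach @ combined, 0, 1)        # extend reachability by one hop
    return reach                                       # reach[i, j] = 1 if j is within `hops` hops of i
```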