Journal Articles
142 articles found
1. A time-aware query-focused summarization of an evolving microblogging stream via sentence extraction
Authors: Fei Geng, Qilie Liu, Ping Zhang. Digital Communications and Networks (SCIE), 2020, Issue 3, pp. 389-397.
With the number of social media users ramping up, microblogs are generated and shared at record levels. The high momentum and large volume of short texts bring redundancies and noise, in which users and analysts often find it problematic to elicit useful information of interest. In this paper, we study query-focused summarization as a solution to this issue and propose a novel summarization framework to generate personalized online summaries and historical summaries of arbitrary time durations. Our framework can deal with dynamic, perpetual, and large-scale microblogging streams. Specifically, we propose an online microblogging stream clustering algorithm to cluster microblogs and maintain distilled statistics called Microblog Cluster Vectors (MCV). We then develop a ranking method to extract the most representative sentences relative to the query from the MCVs and generate a query-focused summary of arbitrary time durations. Our experiments on large-scale real microblogs demonstrate the efficiency and effectiveness of our approach.
Keywords: Microblog; query-focused summarization; computational linguistics; sentence extraction; personalized PageRank
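The online clustering step described in the abstract above lends itself to a brief illustration. The following is a toy sketch (not the paper's implementation) of maintaining MCV-style cluster statistics over a stream: each cluster keeps a running sum of normalized text vectors, a member count, and a last-update timestamp, and each incoming microblog vector either joins the most similar cluster or opens a new one. The similarity threshold and the choice of text vectorization are assumptions.

```python
import time
import numpy as np

class MCV:
    """Distilled cluster statistics: running vector sum, member count, last update time."""
    def __init__(self, vec):
        self.sum_vec = vec.copy()
        self.count = 1
        self.updated = time.time()

    def centroid(self):
        c = self.sum_vec / self.count
        return c / (np.linalg.norm(c) + 1e-12)

    def add(self, vec):
        self.sum_vec += vec
        self.count += 1
        self.updated = time.time()

def assign_microblog(clusters, vec, sim_threshold=0.5):
    """Route one microblog vector to the most similar MCV, or start a new cluster."""
    vec = vec / (np.linalg.norm(vec) + 1e-12)
    if clusters:
        sims = [float(vec @ c.centroid()) for c in clusters]
        best = int(np.argmax(sims))
        if sims[best] >= sim_threshold:
            clusters[best].add(vec)
            return best
    clusters.append(MCV(vec))
    return len(clusters) - 1
```

A summarizer could then rank sentences inside the clusters whose timestamps fall in the requested time window, but that ranking step is not sketched here.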
2. Video Summarization Approach Based on Binary Robust Invariant Scalable Keypoints and Bisecting K-Means
Authors: Sameh Zarif, Eman Morad, Khalid Amin, Abdullah Alharbi, Wail S. Elkilani, Shouze Tang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3565-3583.
Due to the exponential growth of video data, aided by rapid advancements in multimedia technologies, it has become difficult for users to obtain information from a long video series. The process of providing an abstract of an entire video that includes its most representative frames is known as static video summarization; it enables rapid exploration, indexing, and retrieval of massive video libraries. We propose a framework for static video summarization based on Binary Robust Invariant Scalable Keypoints (BRISK) and the bisecting K-means clustering algorithm. The method recognizes relevant frames by extracting BRISK keypoints and descriptors from the video sequences. The frames' BRISK features are clustered using bisecting K-means, and each keyframe is determined by selecting the frame nearest to the cluster center. Without requiring any clustering parameters, the appropriate number of clusters is determined using the silhouette coefficient. Experiments were carried out on the publicly available Open Video Project (OVP) dataset, which contains videos of different genres. The proposed method's effectiveness is compared to existing methods using a variety of evaluation metrics, and the method achieves a trade-off between computational cost and quality.
Keywords: BRISK; bisecting K-means; video summarization; keyframe extraction; shot detection
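As a rough illustration of the pipeline described in the abstract above (a sketch under assumptions, not the authors' implementation): BRISK descriptors of sampled frames are averaged into one fixed-length vector per frame (a simplification), bisecting K-means clusters those vectors, the cluster count is chosen by the silhouette coefficient, and the frame closest to each cluster centre becomes a keyframe. The frame-sampling step and the cluster-count range are assumed parameters; BisectingKMeans requires a recent scikit-learn.

```python
import cv2
import numpy as np
from sklearn.cluster import BisectingKMeans
from sklearn.metrics import silhouette_score

def frame_features(video_path, step=30):
    """Sample every `step`-th frame and describe it by the mean of its BRISK descriptors."""
    brisk = cv2.BRISK_create()
    cap = cv2.VideoCapture(video_path)
    feats, frame_ids, i = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, desc = brisk.detectAndCompute(gray, None)
            if desc is not None:
                feats.append(desc.mean(axis=0))   # fixed-length frame vector (a simplification)
                frame_ids.append(i)
        i += 1
    cap.release()
    return np.array(feats), frame_ids

def select_keyframes(feats, k_range=range(2, 10)):
    """Choose k by silhouette score, then take the frame nearest each cluster centre."""
    best = None
    for k in k_range:
        model = BisectingKMeans(n_clusters=k, random_state=0).fit(feats)
        score = silhouette_score(feats, model.labels_)
        if best is None or score > best[0]:
            best = (score, model)
    model = best[1]
    keyframes = []
    for c in range(model.n_clusters):
        members = np.where(model.labels_ == c)[0]
        dists = np.linalg.norm(feats[members] - model.cluster_centers_[c], axis=1)
        keyframes.append(int(members[np.argmin(dists)]))
    return sorted(keyframes)
```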
3. A Hybrid Method of Extractive Text Summarization Based on Deep Learning and Graph Ranking Algorithms (Cited by 1)
Authors: SHI Hui, WANG Tiexin. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2022, Issue S01, pp. 158-165.
In the era of Big Data, we face an inevitable and challenging problem of information overload. To alleviate this problem, it is important to use effective automatic text summarization techniques to obtain key information quickly and efficiently from huge amounts of text. In this paper, we propose a hybrid method of extractive text summarization based on deep learning and graph ranking algorithms (ETSDG). In this method, a pre-trained deep learning model is designed to yield useful sentence embeddings. Given the associations between sentences in the raw documents, a traditional LexRank algorithm with fine-tuning is adopted in ETSDG. In order to improve the performance of the extractive text summarization method, we further integrate the traditional LexRank algorithm with deep learning. Testing results on the DUC2004 dataset show that ETSDG achieves better ROUGE scores than certain benchmark methods.
Keywords: Extractive text summarization; deep learning; sentence embeddings; LexRank
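A minimal sketch of the general idea in the entry above, pretrained sentence embeddings feeding a LexRank-style ranking; this is not the ETSDG implementation, and the sentence-transformers encoder name, similarity threshold, and damping factor are assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def lexrank_summary(sentences, top_k=3, threshold=0.3, damping=0.85, iters=100):
    """Rank sentences by LexRank-style centrality computed over embedding similarities."""
    model = SentenceTransformer("all-MiniLM-L6-v2")          # assumed encoder, not the paper's
    emb = model.encode(sentences)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T                                        # cosine similarity matrix
    adj = (sim >= threshold).astype(float)                   # thresholded sentence graph
    adj /= adj.sum(axis=1, keepdims=True)                    # row-stochastic transitions
    n = len(sentences)
    score = np.full(n, 1.0 / n)
    for _ in range(iters):                                   # PageRank-style power iteration
        score = (1 - damping) / n + damping * (adj.T @ score)
    top = sorted(np.argsort(-score)[:top_k])
    return [sentences[i] for i in top]
```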
4. Extractive Summarization Using Structural Syntax, Term Expansion and Refinement
Authors: Mohamed Taybe Elhadi. International Journal of Intelligence Science, 2017, Issue 3, pp. 55-71.
This paper describes a procedure developed, and reports on experiments performed, to study the utility of combining a structural property of a text's sentences with term expansion using WordNet [1] and a local thesaurus [2] in selecting the most appropriate extractive summary for a particular document. Sentences were tagged and normalized and then subjected to the Longest Common Subsequence (LCS) algorithm [3] [4] to select the most similar subset of sentences. Calculated similarity was based on the LCS of pairs of sentences that make up the document. A normalized score was calculated and used to rank sentences. A selected top subset of the most similar sentences was then tokenized to produce a set of important keywords or terms. The produced terms were further expanded into two subsets using 1) WordNet and 2) a local electronic dictionary/thesaurus. The three sets obtained (the original and the two expanded ones) were then recycled to further refine and expand the list of selected sentences from the original document. The process was repeated a number of times in order to find the best representative set of sentences. A final set of the top (best) sentences was selected as candidate sentences for the summary. To verify the utility of the procedure, a number of experiments were conducted on an email corpus. The results were compared to those produced by human annotators as well as to results produced using a basic sentence-similarity calculation method. The results were very encouraging and compared well to those of the human annotators and of Jaccard sentence similarity.
Keywords: Extractive summarization; syntactical structures; sentence similarity; longest common subsequence; term expansion; WordNet; local thesaurus
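A compact sketch of the core similarity computation from the entry above (not the paper's code): pairwise sentence similarity is the LCS length normalized by the longer sentence, and sentences are ranked by their average similarity to the rest of the document. Tokenization here is naive whitespace splitting, which is an assumption.

```python
def lcs_length(a, b):
    """Classic dynamic-programming LCS length over two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rank_sentences_by_lcs(sentences):
    """Score each sentence by its average normalized LCS with every other sentence."""
    tokens = [s.lower().split() for s in sentences]
    scores = []
    for i, a in enumerate(tokens):
        total = 0.0
        for j, b in enumerate(tokens):
            if i != j and a and b:
                total += lcs_length(a, b) / max(len(a), len(b))  # normalized LCS similarity
        scores.append(total / max(len(tokens) - 1, 1))
    return sorted(range(len(sentences)), key=lambda i: -scores[i])
```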
5. A Deep Look into Extractive Text Summarization
Authors: Jhonathan Quillo-Espino, Rosa María Romero-González, Ana-Marcela Herrera-Navarro. Journal of Computer and Communications, 2021, Issue 6, pp. 24-37.
This investigation presents an approach to Extractive Automatic Text Summarization (EATS). A framework focused on the summarization of a single document was developed, using the TF-IDF (Term Frequency, Inverse Document Frequency) method as a reference: the document is divided into a subset of documents, a value is generated for each of the words contained in each document, and the documents whose TF-IDF is equal to or higher than the threshold are those of greater importance and can therefore be weighted to generate a text summary according to the user's request. This work represents a derived model of text-mining application in today's world. We demonstrate how the summarization is performed, using random values to check its performance. The experimental results show a satisfactory and understandable summary; summaries were produced efficiently and quickly, showing the most important text sentences according to the threshold selected by the user.
Keywords: Text mining; preprocessing; text summarization; extractive text summarization
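A small sketch of the thresholding idea described in the entry above (an approximation, not the framework itself): each sentence is treated as a sub-document, scored by the mean TF-IDF weight of its terms, and kept when the score meets a user-chosen threshold. The threshold value and English stop-word list are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_threshold_summary(sentences, threshold=0.2):
    """Keep sentences whose mean TF-IDF term weight meets the user-chosen threshold."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(sentences)                     # each sentence as a sub-document
    sums = np.asarray(tfidf.sum(axis=1)).ravel()
    counts = np.maximum(np.asarray((tfidf != 0).sum(axis=1)).ravel(), 1)
    scores = sums / counts                                   # mean weight of each sentence's terms
    return [s for s, sc in zip(sentences, scores) if sc >= threshold]
```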
6. Insertion of Ontological Knowledge to Improve Automatic Summarization Extraction Methods
Authors: Jésus Antonio Motta, Laurence Capus, Nicole Tourigny. Journal of Intelligent Learning Systems and Applications, 2011, Issue 3, pp. 131-138.
The vast availability of information sources has created a need for research on automatic summarization. Current methods perform either extraction or abstraction. Extraction methods are interesting because they are robust and independent of the language used. An extractive summary is obtained by selecting sentences from the original source based on information content. This selection can be automated using a classification function induced by a machine learning algorithm. This function classifies sentences into two groups, important or non-important, and the important sentences then form the summary. However, the efficiency of this function depends directly on the training set used to induce it. This paper proposes an original way of optimizing the training set by inserting lexemes obtained from ontological knowledge bases, so that the optimized training set is reinforced by ontological knowledge. An experiment with four machine learning algorithms was performed to validate this proposition. The improvement achieved is clearly significant for each of these algorithms.
Keywords: Automatic summarization; ontology; machine learning; extraction method
7. Automatic Text Summarization Using Genetic Algorithm and Repetitive Patterns (Cited by 2)
Authors: Ebrahim Heidary, Hamïd Parvïn, Samad Nejatian, Karamollah Bagherifard, Vahideh Rezaie, Zulkefli Mansor, Kim-Hung Pho. Computers, Materials & Continua (SCIE, EI), 2021, Issue 4, pp. 1085-1101.
Given the increasing volume of text documents, automatic summarization is one of the important tools for quick and optimal utilization of such sources. Automatic summarization is a text compression process for producing a shorter document in order to quickly access the important goals and main features of the input document. In this study, a novel method is introduced for extractive text summarization using a genetic algorithm and the generation of repetitive patterns. An important feature of the proposed summarization is identifying and extracting the relationships between the main features of the input text and creating repetitive patterns, in order to produce and optimize the feature vector of the main document when generating the summary, in contrast to previous methods. Attempts were made to cover all the main requirements of a summary, including an unambiguous summary with the highest precision, continuity, and consistency. To investigate the efficiency of the proposed algorithm, the results were evaluated with respect to precision and recall criteria. The evaluation showed optimization of the feature dimensions and the generation of a sequence of summary sentences with the greatest consistency with the main goals and features of the input document.
Keywords: Natural language processing; extractive summarization; feature optimization; repetitive patterns; genetic algorithm
8. Educational Videos Subtitles' Summarization Using Latent Dirichlet Allocation and Length Enhancement (Cited by 1)
Authors: Sarah S. Alrumiah, Amal A. Al-Shargabi. Computers, Materials & Continua (SCIE, EI), 2022, Issue 3, pp. 6205-6221.
Nowadays, people use online resources such as educational videos and courses. However, such videos and courses are mostly long, so summarizing them is valuable. The video contents (visual, audio, and subtitles) can be analyzed to generate textual summaries, i.e., notes. Video subtitles contain significant information, so summarizing subtitles is an effective way to concentrate on the necessary details. Most existing studies used Term Frequency-Inverse Document Frequency (TF-IDF) and Latent Semantic Analysis (LSA) models to create lecture summaries. This study takes another approach and applies Latent Dirichlet Allocation (LDA), which has proved its effectiveness in document summarization. Specifically, the proposed LDA summarization model follows three phases. The first phase prepares the subtitle file for modelling by performing preprocessing steps such as removing stop words. In the second phase, the LDA model is trained on the subtitles to generate the keyword list used to extract important sentences. In the third phase, a summary is generated based on the keyword list. The summaries generated by LDA were lengthy; thus, a length enhancement method is proposed. For evaluation, the authors developed manual summaries of the existing EDUVSUM educational videos dataset and compared the generated summaries with the manually generated outlines using two methods: (i) Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and (ii) human evaluation. The LDA-based summaries outperform the summaries generated by TF-IDF and LSA. Besides reducing the summaries' length, the proposed length enhancement method improved the summaries' precision rates. Other domains, such as news videos, can apply the proposed method for video summarization.
Keywords: Subtitle summarization; educational videos; topic modelling; LDA; extractive summarization
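The three-phase pipeline in the entry above invites a brief sketch (illustrative only, not the authors' code): subtitle lines are vectorized with stop words removed, an LDA model supplies a keyword list from its top topic words, and the sentences mentioning the most keywords form the summary. The topic count, words per topic, and summary length are assumed parameters; the length enhancement step is omitted.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def lda_keywords(subtitle_lines, n_topics=5, words_per_topic=10):
    """Phase 1-2: preprocess subtitle lines and collect the top words of each LDA topic."""
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(subtitle_lines)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(counts)
    vocab = np.array(vec.get_feature_names_out())
    keywords = set()
    for topic in lda.components_:
        keywords.update(vocab[np.argsort(topic)[::-1][:words_per_topic]])
    return keywords

def extract_by_keywords(subtitle_lines, keywords, top_k=5):
    """Phase 3: keep the sentences that mention the most LDA keywords."""
    scores = [sum(w in keywords for w in line.lower().split()) for line in subtitle_lines]
    order = sorted(range(len(subtitle_lines)), key=lambda i: -scores[i])[:top_k]
    return [subtitle_lines[i] for i in sorted(order)]
```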
9. Automatic Persian Text Summarization Using Linguistic Features from Text Structure Analysis (Cited by 1)
Authors: Ebrahim Heidary, Hamïd Parvïn, Samad Nejatian, Karamollah Bagherifard, Vahideh Rezaie. Computers, Materials & Continua (SCIE, EI), 2021, Issue 12, pp. 2845-2861.
With the remarkable growth of textual data sources in recent years, easy, fast, and accurate text processing has become a challenge with significant payoffs. Automatic text summarization is the process of compressing text documents into shorter summaries for easier review of their core contents, which must be done without losing important features and information. This paper introduces a new hybrid method for extractive text summarization with feature selection based on text structure. The major advantage of the proposed method over previous systems is the modeling of text structure and of the relationships between entities in the input text, which improves the sentence feature selection process and leads to the generation of unambiguous, concise, consistent, and coherent summaries. The paper also presents the results of evaluating the proposed method based on precision and recall criteria. It is shown that the method produces summaries consisting of chains of sentences with the aforementioned characteristics from the original text.
Keywords: Natural language processing; extractive summarization; linguistic features; text structure analysis
10. RETRACTED: Recent Approaches for Text Summarization Using Machine Learning & LSTM
Authors: Neeraj Kumar Sirohi, Mamta Bansal, S. N. Rajan. Journal on Big Data, 2021, Issue 1, pp. 35-47.
Nowadays, data is increasing rapidly in every domain, such as social media, news, education, and banking, and most of this data and information is in the form of text. Much of that text contains little valuable information or knowledge amid a lot of unwanted content. To fetch the valuable information out of huge text documents, we need a summarizer that can extract data automatically and, at the same time, summarize the document, particularly the text of a new document, without losing any vital information. Summarization can be extractive or abstractive. Extractive summarization involves picking high-ranking sentences from the text, scored using sentence and word features, and putting them together to produce a summary. Abstractive summarization is based on understanding the key ideas in the given text and then expressing those ideas in natural language; it is a current problem area for NLP (natural language processing), ML (machine learning), and NN (neural networks). In this paper, the foremost techniques for automatic text summarization are described, different existing methods are reviewed, and their effectiveness and limitations are discussed. Further, a novel approach based on neural networks and LSTM is discussed; in this machine learning approach, the underlying architecture is called the encoder-decoder.
Keywords: Text summarization; extractive summary; abstractive summary; NLP; LSTM
11. CINOSUM: An Extractive Summarization Model for Multi-Ethnic Low-Resource Languages
Authors: 翁彧, 罗皓予, 超木日力格, 刘轩, 董俊, 刘征. 《计算机科学》 (Computer Science), CSCD, Peking University Core, 2024, Issue 7, pp. 296-302.
To address the inability of existing models to generate automatic summaries for multi-ethnic low-resource languages, CINOSUM, an extractive summarization model for multi-ethnic low-resource languages, is proposed based on CINO. To extend the language coverage of text summarization, MESUM, a summarization dataset covering several ethnic-minority languages, is first constructed. To overcome the poor performance of previous models on low-resource languages, a framework with a unified sentence extractor is built to perform extractive summarization across different ethnic languages. In addition, a joint training method over multilingual datasets is proposed to compensate for insufficient knowledge acquisition, thereby extending the approach to low-resource languages and significantly enhancing the model's adaptability and flexibility. Finally, extensive experiments on the MESUM dataset show that CINOSUM performs excellently in multi-ethnic low-resource settings, including Tibetan and Uyghur, and achieves significant performance gains under the ROUGE evaluation framework.
Keywords: Extractive summarization; multilingual pre-trained models; low-resource language information processing; knowledge transfer
12. A Survey of Extractive Text Summarization Based on Unsupervised Learning and Supervised Learning (Cited by 1)
Authors: 夏吾吉, 黄鹤鸣, 更藏措毛, 范玉涛. 《计算机应用》 (Journal of Computer Applications), CSCD, Peking University Core, 2024, Issue 4, pp. 1035-1048.
Compared with abstractive summarization, extractive summarization is simple to implement, highly readable, and widely applicable. Existing surveys of extractive summarization, however, analyze only a particular method or domain and lack a systematic, multi-faceted, multilingual review. This survey therefore examines the nature of the text summarization task and, by systematically organizing and distilling the existing literature, analyzes extractive text summarization techniques based on unsupervised and supervised learning from multiple dimensions. First, the development of text summarization technology is reviewed and different extractive methods are analyzed, mainly covering rule-based, TF-IDF (term frequency-inverse document frequency), centrality-based, latent-semantic, deep learning, graph ranking, feature engineering, and pre-training approaches, and their differences are compared. Second, the commonly used datasets and mainstream evaluation metrics for summarization in different languages are introduced in detail, and methods are compared on the same datasets under different experimental metrics. Finally, the main problems and challenges in current extractive summarization research are identified, and concrete solutions and future development trends are proposed.
Keywords: Extractive summarization; unsupervised learning; supervised learning; datasets; evaluation metrics
13. A Product Summary Extraction Model Fusing Multimodal Information
Authors: 赵强, 王中卿, 王红玲. 《计算机应用》 (Journal of Computer Applications), CSCD, Peking University Core, 2024, Issue 1, pp. 73-78.
On online shopping platforms, concise, truthful, and effective product summaries are crucial for improving the shopping experience. Since online shoppers cannot touch the physical product, the information contained in product images is important visual information beyond the textual product description, so product summaries that fuse multimodal information, including product text and product images, are of great significance for online shopping. To fuse the product text description with the product image, a product summary extraction model based on multimodal information is proposed. Unlike the usual product summarization task, whose input contains only the product text description, this model introduces the product image as an additional information source, making the extracted summary richer. Specifically, pre-trained models are first used to represent the product text description and the product image: a textual feature representation is extracted for each sentence of the description, and a global visual feature representation is extracted from the image. Then, a low-rank tensor-based multimodal fusion method fuses the textual feature of each sentence with the global visual feature, yielding a multimodal feature representation for each sentence. Finally, the multimodal features of all sentences are fed into a summary generator to produce the final product summary. Comparative experiments were conducted on the CEPSUM (Chinese E-commerce Product SUMmarization) 2.0 dataset: on its three subsets, the model's average ROUGE-1 is 3.12 percentage points higher than TextRank and 1.75 percentage points higher than BERTSUMExt (BERT SUMmarization Extractive). The results show that fusing product text and image information is effective for product summarization and that the model performs well on the ROUGE metrics.
Keywords: Product summarization; multimodal summarization; extractive summarization; multimodal fusion; automatic summarization
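The low-rank tensor fusion step mentioned in the entry above can be sketched schematically (assumptions throughout; this is not the paper's model): a per-sentence text vector and a global image vector are each projected through rank-R factor matrices, multiplied element-wise, and summed over the rank dimension, approximating outer-product fusion at a fraction of the cost. The dimensions, rank, and the appended constant term follow the common low-rank multimodal fusion recipe rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Fuse a text vector and an image vector with rank-R factors instead of a full outer product."""
    def __init__(self, text_dim, img_dim, out_dim, rank=4):
        super().__init__()
        self.text_factor = nn.Parameter(torch.randn(rank, text_dim + 1, out_dim) * 0.02)
        self.img_factor = nn.Parameter(torch.randn(rank, img_dim + 1, out_dim) * 0.02)

    def forward(self, text_vec, img_vec):
        # Append a constant 1 so purely unimodal terms survive the element-wise product.
        t = torch.cat([text_vec, text_vec.new_ones(text_vec.size(0), 1)], dim=-1)
        v = torch.cat([img_vec, img_vec.new_ones(img_vec.size(0), 1)], dim=-1)
        fused = (t @ self.text_factor) * (v @ self.img_factor)  # (rank, batch, out_dim)
        return fused.sum(dim=0)                                 # (batch, out_dim)

# Example: fuse 10 sentence vectors (768-d, hypothetical) with one 2048-d image vector
# broadcast to every sentence, producing per-sentence multimodal features.
fusion = LowRankFusion(text_dim=768, img_dim=2048, out_dim=256)
sent = torch.randn(10, 768)
img = torch.randn(1, 2048).expand(10, -1)
multimodal = fusion(sent, img)   # shape (10, 256)
```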
14. Extractive Text Summarization with a Heterogeneous Graph Network over Sub-Sentence Units
Authors: 林群凯, 陈钰枫, 徐金安, 张玉洁, 刘健. 《中文信息学报》 (Journal of Chinese Information Processing), CSCD, Peking University Core, 2024, Issue 6, pp. 119-128.
Text summarization aims to compress, condense, and generalize a long text into a short text that conveys its gist, helping people quickly obtain the main information of a document. Most current research on extractive text summarization takes the whole sentence as the extraction unit, which introduces redundant information, so this paper considers finer-grained extraction units. Previous studies have shown that fine-grained sub-sentence units have advantages over whole sentences for extractive summarization. Combining this insight with currently popular graph neural networks, an extractive summarization model based on a heterogeneous graph network over sub-sentence units is proposed; it effectively fuses linguistic information at different levels, such as words, entities, and sub-sentence units, and enables finer-grained extractive summarization. Experimental results on large-scale benchmark corpora (CNN/DM and NYT) show that the model delivers breakthrough performance and outperforms previous extractive summarization models.
Keywords: Sub-sentence unit; heterogeneous graph; extractive summarization
15. Multi-Document Summarization Based on Siamese-Network Text Semantic Matching
Authors: 钟琪, 王中卿, 王红玲. 《中文信息学报》 (Journal of Chinese Information Processing), CSCD, Peking University Core, 2024, Issue 5, pp. 107-116.
Multi-document summarization aims to extract, from a set of topically related documents, the sentences that best represent the central content of the document set, while text semantic matching refers to learning the semantic relations between two text units so that sentence representations carry richer semantic information. This paper proposes a multi-document extractive summarization method based on Siamese-network text semantic matching, which combines a Siamese network with the pre-trained language model BERT to build a joint learning model for text semantic matching and text summarization. The model uses the Siamese network to examine the semantic association between any two text units from different perspectives, learns the fragmented information in the document set, and further evaluates the important information; finally, the summarization component selects the sentences that best represent the main content of the document set to form the summary. Experimental results show that, compared with current mainstream multi-document extractive summarization methods, the proposed method achieves a considerable improvement on the ROUGE metrics.
Keywords: Multi-document extractive summarization; semantic relations; pre-trained language models
16. An Extractive-Abstractive Automatic Summarization Model for Judicial Documents
Authors: 陈炫言, 安娜, 孙宇, 周炼赤. 《计算机工程与设计》 (Computer Engineering and Design), Peking University Core, 2024, Issue 4, pp. 1117-1125.
To address the problems that extractive summaries splice key information rigidly and that abstractive summaries tend to overlook important information when the source text is too long, the combination of extractive and abstractive summarization is studied. Extractive summarization can pull out the key information of a text and shorten the source, while abstractive summarization can reduce the information loss between sequences and strengthen textual coherence. An extractive-abstractive automatic summarization model for judicial documents is proposed that combines the advantages of both, avoiding the problems of repeated key information and ungrammatical reorganized passages found in single models and ensuring the faithfulness and completeness of the extracted legal documents. Experimental results on a large-scale public dataset of judicial judgment documents show that the model achieves higher ROUGE scores, indicating improved summary quality.
Keywords: Automatic summarization; extractive; abstractive; algorithm fusion; judgment documents; legal domain; completeness and coherence
17. An Unsupervised Keyword Extraction Method Based on Text Summarization
Authors: 尤泽顺, 周喜, 董瑞, 张洋宁, 杨奉毅. 《计算机工程与设计》 (Computer Engineering and Design), Peking University Core, 2024, Issue 9, pp. 2779-2784.
To overcome the performance drop of embedding-based keyword extraction methods on long documents, a summarization-based method, SDERank (summarization-based document embedding rank), is proposed. The weighted sum of sentence vectors is used as the document embedding, with each sentence weighted by its semantic relevance to the document topic. Previous embedding-based methods ignore the associations among candidate words when selecting keywords; to address this, in SDERank+, an improved version of SDERank, the PageRank algorithm is used to extract co-occurrence weights among candidate words as a correction to the similarity scores. Experimental results on four widely used datasets show that SDERank and SDERank+ outperform the previous best model, MDERank, by 2.2% and 3.29% in F1 score on average, respectively.
Keywords: Automatic keyword extraction; text summarization; long document modeling; document topic analysis; semantic processing; weight optimization; vector similarity
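An illustrative approximation (not the authors' code) of the document-embedding idea in the entry above: sentences are weighted by their similarity to the centroid as a stand-in for topic relevance, the weighted sum forms the document embedding, and candidate keywords are ranked by similarity to it. The sentence-transformers encoder and the centroid-based weighting are assumptions, and the PageRank co-occurrence correction of SDERank+ is omitted.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def sderank_like(sentences, candidates, top_k=10):
    """Build a weighted-sum document embedding and rank candidate keywords against it."""
    model = SentenceTransformer("all-MiniLM-L6-v2")           # assumed encoder
    sent_emb = model.encode(sentences, normalize_embeddings=True)
    centroid = sent_emb.mean(axis=0)                          # stand-in for the document topic
    weights = np.clip(sent_emb @ centroid, 0, None)           # per-sentence relevance weights
    doc_emb = (weights[:, None] * sent_emb).sum(axis=0)       # weighted sum = document embedding
    doc_emb /= np.linalg.norm(doc_emb) + 1e-12
    cand_emb = model.encode(candidates, normalize_embeddings=True)
    scores = cand_emb @ doc_emb
    return [candidates[i] for i in np.argsort(-scores)[:top_k]]
```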
18. An Extractive Text Summarization Model Based on Heterogeneous Graphs and Keywords
Authors: 朱颀林, 王羽, 徐建. 《电子科技大学学报》 (Journal of University of Electronic Science and Technology of China), EI, CAS, CSCD, Peking University Core, 2024, Issue 2, pp. 259-270.
Extractive text summarization selects sentences from a long text according to certain strategies to compose a summary; the key is to make full use of the text's semantic and structural information. To better mine this information and use it to guide summary extraction, an extractive text summarization model based on heterogeneous graphs and keywords (HGKSum) is proposed. The model first represents the text as a heterogeneous graph composed of sentence nodes and word nodes and uses a graph attention network to learn node features on this graph. Keyword extraction is then treated as an auxiliary task of summarization, and the model is trained with multi-task learning to obtain candidate summaries. Finally, the candidate summaries are refined to reduce redundancy and yield the final summary. Comparative experiments on benchmark datasets show that the model outperforms baseline models, and ablation experiments confirm the necessity of introducing heterogeneous nodes and keywords.
Keywords: Extractive text summarization; heterogeneous graph; keywords; graph attention network; multi-task learning
19. Fine-Grained Multi-Document Summary Extraction Based on Hierarchical Learning over Heterogeneous Graphs
Authors: 翁裕源, 许柏炎, 蔡瑞初. 《计算机工程》 (Computer Engineering), CAS, CSCD, Peking University Core, 2024, Issue 3, pp. 336-344.
Multi-document summary extraction aims to distill the key information shared across multiple documents, with stricter conciseness requirements than single-document summarization. Existing multi-document extractive methods usually model at the sentence level, which easily introduces redundant information. To solve this problem, a multi-document summary extraction framework based on hierarchical learning over heterogeneous graphs is proposed, which effectively models semantic and structural relations by hierarchically constructing a word-level graph and a sub-sentence-level graph. To learn these two heterogeneous graphs, two learning layers with different hierarchical update mechanisms are designed to reduce the difficulty of learning multiple structural relations. In the word-level graph learning layer, an alternating update mechanism updates nodes of different granularities, with word nodes serving as carriers for passing semantic information through a graph attention network; in the sub-sentence-level graph learning layer, a two-stage stepwise update mechanism aggregates multiple structural relations, aggregating homogeneous relations in the first stage and heterogeneous relations via attention in the second. Experimental results show that, compared with extractive baseline models, the framework achieves significant improvements on the Multinews dataset, with ROUGE-1, ROUGE-2, and ROUGE-L improved by 0.88, 0.23, and 2.27, respectively; ablation results also verify the effectiveness of the two learning layers and their hierarchical update mechanisms.
Keywords: Extractive multi-document summarization; fine-grained modeling; heterogeneous graph; hierarchical learning; semantic relations; structural relations
20. An Extractive Summarization Algorithm for Chinese Legal Judgment Documents
Authors: 温嘉宝, 杨敏. 《集成技术》 (Journal of Integration Technology), 2024, Issue 1, pp. 62-71.
Automatic summarization of judgment documents aims to let computers automatically select, extract, and compress the important information in legal texts, thereby reducing the workload of legal practitioners. At present, most summarization algorithms based on pre-trained language models are limited in input length and therefore cannot summarize long texts effectively. This paper proposes a new extractive summarization algorithm that uses a pre-trained language model to generate sentence vectors and, based on a Transformer encoder structure, fuses information including sentence vectors, sentence positions, and sentence lengths to perform sentence-level summarization. Experimental results show that the algorithm handles long-text summarization effectively. In addition, tests on the summarization dataset of the 2020 Challenge of AI in Law (CAIL) show that, compared with baseline models, the model achieves significant improvements on ROUGE-1, ROUGE-2, and ROUGE-L.
Keywords: Extractive summarization model; legal judgment documents; automatic text summarization; deep neural networks