Journal Articles
74 articles found
1. A time-aware query-focused summarization of an evolving microblogging stream via sentence extraction
Authors: Fei Geng, Qilie Liu, Ping Zhang. Digital Communications and Networks (SCIE), 2020, No. 3, pp. 389-397 (9 pages)
With the number of social media users ramping up, microblogs are generated and shared at record levels. The high momentum and large volumes of short texts bring redundancies and noise, in which users and analysts often find it problematic to elicit useful information of interest. In this paper, we study query-focused summarization as a solution to this issue and propose a novel summarization framework to generate personalized online summaries and historical summaries of arbitrary time durations. Our framework can deal with dynamic, perpetual, and large-scale microblogging streams. Specifically, we propose an online microblogging stream clustering algorithm to cluster microblogs and maintain distilled statistics called Microblog Cluster Vectors (MCVs). Then we develop a ranking method to extract the most representative sentences relative to the query from the MCVs and generate a query-focused summary of arbitrary time durations. Our experiments on large-scale real microblogs demonstrate the efficiency and effectiveness of our approach.
Keywords: microblog, query-focused summarization, computational linguistics, sentence extraction, personalized PageRank
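A minimal sketch of the kind of online stream clustering this abstract describes. The class name, whitespace tokenization, and cosine threshold are illustrative assumptions, not details from the paper:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two word-count vectors (Counters).
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MicroblogClusterVector:
    # Distilled cluster statistics: aggregated term counts plus member posts.
    def __init__(self):
        self.terms = Counter()
        self.posts = []

    def add(self, post):
        self.terms.update(post.lower().split())
        self.posts.append(post)

def online_cluster(stream, threshold=0.3):
    # Assign each incoming microblog to the most similar cluster,
    # or open a new cluster when no similarity reaches the threshold.
    clusters = []
    for post in stream:
        vec = Counter(post.lower().split())
        best, best_sim = None, threshold
        for c in clusters:
            sim = cosine(vec, c.terms)
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            best = MicroblogClusterVector()
            clusters.append(best)
        best.add(post)
    return clusters
```

A one-pass scheme like this keeps only per-cluster statistics, so memory stays bounded as the stream grows.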
2. TWO-STAGE SENTENCE SELECTION APPROACH FOR MULTI-DOCUMENT SUMMARIZATION
Authors: Zhang Shu, Zhao Tiejun, Zheng Dequan, Zhao Hua. Journal of Electronics (China), 2008, No. 4, pp. 562-567 (6 pages)
Compared with the traditional method of adding sentences to form a summary in multi-document summarization, a two-stage sentence selection approach is proposed that generates the summary by deleting sentences from a candidate sentence set. It has two stages: acquisition of a candidate sentence set and optimum selection of sentences. At the first stage, the candidate sentence set is obtained by a redundancy-based sentence selection approach. At the second stage, sentences are deleted from the candidate sentence set according to their contribution to the whole set until the appointed summary length is reached. On a test corpus, the ROUGE scores of summaries produced by the proposed approach demonstrate its validity compared with the traditional method of sentence selection. The influence of the token chosen in the two-stage sentence selection approach on the quality of the generated summaries is also analyzed.
Keywords: text information processing, automatic summarization, text processing, sentence selection method, multi-document summarization
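The deletion-based second stage can be sketched as a greedy loop that drops whichever candidate loses the least unique information. The contribution measure below (count of terms covered by no other candidate) is an assumed stand-in for the paper's contribution score:

```python
def contribution(sentence, others):
    # A sentence's contribution: how many of its terms appear in no other
    # candidate sentence, i.e. information lost if it were deleted.
    own = set(sentence.lower().split())
    covered = set()
    for s in others:
        covered |= set(s.lower().split())
    return len(own - covered)

def delete_to_length(candidates, max_sentences):
    # Stage two of the deletion-based scheme: repeatedly delete the sentence
    # whose removal loses the least unique information.
    summary = list(candidates)
    while len(summary) > max_sentences:
        worst = min(summary,
                    key=lambda s: contribution(s, [t for t in summary if t != s]))
        summary.remove(worst)
    return summary
```

Deleting from a redundancy-filtered candidate set, rather than adding from scratch, lets every decision be made against the full context of the remaining set.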
3. Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning
Authors: Yuxin Huang, Huailing Gu, Zhengtao Yu, Yumeng Gao, Tong Pan, Jialong Xu. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, No. 1, pp. 121-134 (14 pages)
Cross-lingual summarization (CLS) is the task of generating a summary in a target language from a document in a source language. Recently, end-to-end CLS models have achieved impressive results using large-scale, high-quality datasets typically constructed by translating monolingual summary corpora into CLS corpora. However, due to the limited performance of low-resource language translation models, translation noise can seriously degrade the performance of these models. In this paper, we propose a fine-grained reinforcement learning approach to address low-resource CLS based on noisy data. We introduce the source language summary as a gold signal to alleviate the impact of the translated noisy target summary. Specifically, we design a reinforcement reward by calculating the word correlation and word missing degree between the source language summary and the generated target language summary, and combine it with cross-entropy loss to optimize the CLS model. To validate the performance of our proposed model, we construct Chinese-Vietnamese and Vietnamese-Chinese CLS datasets. Experimental results show that our proposed model outperforms the baselines in terms of both the ROUGE score and BERTScore.
Keywords: cross-lingual summarization, low-resource language, noisy data, fine-grained reinforcement learning, word correlation, word missing degree
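The two reward terms the abstract names can be sketched at the bag-of-words level. The exact formulas and the `alpha` weighting are assumptions for illustration, not the paper's definitions:

```python
def word_correlation(reference_words, generated_words):
    # Fraction of generated words that also occur in the reference
    # (source-language) summary.
    ref = set(reference_words)
    if not generated_words:
        return 0.0
    return sum(1 for w in generated_words if w in ref) / len(generated_words)

def word_missing_degree(reference_words, generated_words):
    # Fraction of reference words absent from the generated summary.
    gen = set(generated_words)
    if not reference_words:
        return 0.0
    return sum(1 for w in reference_words if w not in gen) / len(reference_words)

def rl_reward(reference_words, generated_words, alpha=0.5):
    # Combined reward: encourage correlation, penalize missing words.
    return alpha * word_correlation(reference_words, generated_words) \
        - (1 - alpha) * word_missing_degree(reference_words, generated_words)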

4. Evolutionary Algorithm for Extractive Text Summarization (cited by 1)
Authors: Rasim Alguliev, Ramiz Aliguliyev. Intelligent Information Management, 2009, No. 2, pp. 128-138 (11 pages)
Text summarization is the process of automatically creating a compressed version of a given document that preserves its information content. There are two types of summarization: extractive and abstractive. Extractive summarization methods simplify the problem into selecting a representative subset of the sentences in the original documents; abstractive summarization may compose novel sentences unseen in the original sources. In our study we focus on sentence-based extractive document summarization. Extractive summarization systems are typically based on techniques for sentence extraction and aim to cover the set of sentences that are most important for the overall understanding of a given document. In this paper, we propose an unsupervised document summarization method that creates the summary by clustering and extracting sentences from the original document, for which new criterion functions for sentence clustering are proposed. Similarity measures play an increasingly important role in document clustering, and we have also developed a discrete differential evolution algorithm to optimize the criterion functions. The experimental results show that our suggested approach can improve performance compared to state-of-the-art summarization approaches.
Keywords: sentence clustering, document summarization, discrete differential evolution algorithm
5. Extractive Summarization Using Structural Syntax, Term Expansion and Refinement
Author: Mohamed Taybe Elhadi. International Journal of Intelligence Science, 2017, No. 3, pp. 55-71 (17 pages)
This paper describes a procedure and reports on experiments studying the utility of combining a structural property of a text's sentences with term expansion, using WordNet [1] and a local thesaurus [2], to select the most appropriate extractive summary for a particular document. Sentences were tagged and normalized, then subjected to the Longest Common Subsequence (LCS) algorithm [3] [4] to select the most similar subset of sentences. Similarity was computed from the LCS of the pairs of sentences that make up the document; a normalized score was calculated and used to rank sentences. A selected top subset of the most similar sentences was then tokenized to produce a set of important keywords or terms. The produced terms were further expanded into two subsets using 1) WordNet and 2) a local electronic dictionary/thesaurus. The three sets obtained (the original and the two expanded ones) were then recycled to further refine and expand the list of sentences selected from the original document. The process was repeated a number of times to find the best representative set of sentences, and a final set of top sentences was selected as candidates for summarization. To verify the utility of the procedure, experiments were conducted on an email corpus. The results were compared to those produced by human annotators as well as to results from a basic sentence-similarity calculation method. The produced results were very encouraging and compared well to those of the human annotators and of Jaccard sentence similarity.
Keywords: extractive summarization, syntactical structures, sentence similarity, longest common subsequence, term expansion, WordNet, local thesaurus
6. Density peaks clustering based integrate framework for multi-document summarization (cited by 2)
Authors: Baoyan Wang, Jian Zhang, Yi Liu, Yuexian Zou. CAAI Transactions on Intelligence Technology, 2017, No. 1, pp. 26-30 (5 pages)
Keywords: dynamic programming, computer technology, artificial intelligence, state of development
7. Vision Enhanced Generative Pre-trained Language Model for Multimodal Sentence Summarization
Authors: Liqiang Jing, Yiren Li, Junhao Xu, Yongcan Yu, Pei Shen, Xuemeng Song. Machine Intelligence Research (EI, CSCD), 2023, No. 2, pp. 289-298 (10 pages)
Multimodal sentence summarization (MMSS) is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image. Although existing methods have gained promising success in MMSS, they overlook the powerful generation ability of generative pre-trained language models (GPLMs), which have been shown to be effective in many text generation tasks. To fill this research gap, we propose using GPLMs to promote the performance of MMSS. Notably, adopting GPLMs to solve MMSS inevitably faces two challenges: 1) What fusion strategy should we use to inject visual information into GPLMs properly? 2) How do we keep the GPLM's generation ability intact to the utmost extent when the visual feature is injected? To address these two challenges, we propose a vision-enhanced generative pre-trained language model for MMSS, dubbed Vision-GPLM. In Vision-GPLM, we obtain features of the visual and textual modalities with two separate encoders and utilize a text decoder to produce a summary. In particular, we utilize multi-head attention to fuse the features extracted from the visual and textual modalities to inject the visual feature into the GPLM. Meanwhile, we train Vision-GPLM in two stages: a vision-oriented pre-training stage and a fine-tuning stage. In the vision-oriented pre-training stage, we train the visual encoder on the masked language model task while the other components are frozen, aiming to obtain homogeneous representations of text and image. In the fine-tuning stage, we train all the components of Vision-GPLM on the MMSS task. Extensive experiments on a public MMSS dataset verify the superiority of our model over existing baselines.
Keywords: multimodal sentence summarization (MMSS), generative pre-trained language model (GPLM), natural language generation, deep learning, artificial intelligence
8. Construction of an Automatic Bengali Text Summarizer Using Machine Learning Approaches
Authors: Busrat Jahan, Mahfuja Khatun, Zinat Ara Zabu, Afranul Hoque, Sayed Uddin Rayhan. Journal of Data Analysis and Information Processing, 2022, No. 1, pp. 43-57 (15 pages)
In our study, we chose Python as the programming platform for building an automatic Bengali document summarizer. English has sufficient tools for processing and producing summarized records, but none is specifically applicable to Bengali, which carries a great deal of ambiguity and differs from English in grammar. The language nonetheless holds an important place, being spoken by about 260 million (26 crore) people worldwide, so a new method was developed to summarize Bengali documents. The proposed system comprises the following stages: pre-processing of the input document, word tagging, pronoun replacement, sentence ranking, and summary generation. Pronoun replacement is used to reduce the incidence of dangling pronouns in the output. Sentences are ranked based on sentence frequency, numerical figures, and pronoun replacement, and the similarity between each pair of sentences is checked so that near-duplicates can be excluded. We took 3,000 documents from newspapers and books as input and learned which words fit the syntax. To evaluate the performance of the designed summarizer, the system was tested on different documents. According to the assessment, recall, precision, and F-score were 0.70, 0.82, and 0.74, respectively. Proper pronoun replacement was achieved 72% of the time.
Keywords: natural language processing, formatting, Bangla text summarizer, Bengali language processing, word tagging, pronoun replacement, sentence ranking
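The frequency-based sentence-ranking stage can be sketched as follows. The normalization by sentence length is an assumption; the paper's actual scoring also uses numerical figures and pronoun replacement, which are omitted here:

```python
from collections import Counter

def rank_sentences(sentences):
    # Score each sentence by the corpus frequency of its words,
    # normalized by sentence length, then sort best-first.
    words = Counter(w for s in sentences for w in s.lower().split())

    def score(s):
        toks = s.lower().split()
        return sum(words[w] for w in toks) / len(toks) if toks else 0.0

    return sorted(sentences, key=score, reverse=True)

def summarize(sentences, k=1):
    # Keep the top-k sentences in ranked order.
    return rank_sentences(sentences)[:k]
```

For Bengali text, the same loop would run over Bengali tokens after the tagging and pronoun-replacement stages.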
9. A two-stage text summarization model combining topic and position information
Authors: Ren Shuxia, Zhang Jing, Zhao Zongxian, Rao Dongzhang. Intelligent Computer and Applications, 2023, No. 9, pp. 158-163 (6 pages)
The pre-trained model BERT has significantly improved the performance of text summarization models, but it still falls short in exploring a document's global semantics and in exploiting sentence position information. To address these problems, this paper proposes a two-stage automatic summarization model that combines dual topic embeddings with absolute sentence-position embeddings. First, topic embeddings are introduced in both stages, fusing rich semantic features to capture more accurate global semantics. Second, absolute sentence-position embeddings are introduced in the extractive stage to fully integrate sentence position information, providing more comprehensive auxiliary information for summary extraction. On this basis, the model adopts an extract-then-generate two-stage hybrid summarization framework: the extraction stage pulls out the important information of the text, reducing redundancy in the generated summary and further improving performance. Experimental results on the CNN/Daily Mail dataset show that the proposed model achieves good results.
Keywords: hybrid summarization, BERT, dual topic embedding, absolute sentence position embedding
10. A CQA answer summarization method incorporating answerer ranking scores
Authors: Ding Qiu, Yan Xin, Liu Yanchao, Xu Guangyi, Deng Zhongying. Journal of Shaanxi University of Technology (Natural Science Edition), 2023, No. 5, pp. 38-46 (9 pages)
Existing answer summarization methods model sentences insufficiently and ignore the role of answerer-related information in the summarization process. To address this, a CQA answer summarization method incorporating answerer ranking scores is proposed. First, RoBERTa-wwm combined with average pooling encodes each sentence to obtain a deep semantic representation. Then, the DUM expert-recommendation method ranks answerers according to answerer-related information, and an answerer ranking score is derived from the result. Finally, a comprehensive sentence score is computed from sentence relevance, sentence novelty, and the answerer ranking score, and sentences are iteratively selected with the MMR strategy according to that score to form the answer summary. Experiments show that RoBERTa-wwm with average pooling better captures the deep semantics of answer sentences, and that combining the three scores accounts for the interactions between answers and the question and among answers while also incorporating answerer information, effectively improving the quality of answer summaries.
Keywords: community question answering, answer summarization, RoBERTa-wwm, sentence relevance, sentence novelty, answerer ranking score
11. Chinese automatic summarization based on topic sentence discovery (cited by 8)
Authors: Wang Meng, Li Chungui, Tang Peihe, Wang Xiaorong. Computer Engineering (CAS, CSCD, PKU Core), 2007, No. 8, pp. 180-181, 189 (3 pages)
This paper proposes a Chinese automatic summarization method based on topic sentence discovery. The method uses terms, rather than traditional words, as the smallest semantic unit, and computes term weights with a term length-term frequency scheme to obtain feature words. An improved k-means clustering algorithm clusters the sentences, and topic sentences are discovered from the clustering result. Experiments show that summaries produced by this algorithm outperform traditional summaries on all metrics.
Keywords: topic sentence discovery, automatic summarization, sentence clustering, natural language processing
12. Automatic summarization based on topic word weights and sentence features (cited by 17)
Authors: Jiang Changjin, Peng Hong, Chen Jianchao, Ma Qianli. Journal of South China University of Technology (Natural Science Edition) (EI, CAS, CSCD, PKU Core), 2010, No. 7, pp. 50-55 (6 pages)
To obtain high-quality automatic summaries, a word-weighting formula is constructed on top of a compound-word recognition algorithm, taking full account of word frequency, part of speech, position, and length, so that words and phrases expressing the topic receive higher weights. Sentence weights consider sentence content, position, cue words, and user preference. Summary generation accounts for the similarity among candidate summary sentences, avoiding the inclusion of redundant information. Summary evaluation is refined from sentence granularity to word granularity, with a word-granularity method for computing precision and recall. Experiments show that the summaries generated by this algorithm are of high quality, with an average precision of 77.1%.
Keywords: topic words, automatic summarization, compound words, weight computation, sentence features
13. Optimized sentence selection for multi-document summarization (cited by 13)
Authors: Qin Bing, Liu Ting, Chen Shanglin, Li Sheng. Journal of Computer Research and Development (EI, CSCD, PKU Core), 2006, No. 6, pp. 1129-1134 (6 pages)
Building on sub-topic partitioning for multi-document summarization, this paper proposes a method for optimized selection of summary sentences across sub-topics. First, sub-topics of the multi-document collection are formed on the basis of sentence similarity computation, and each sub-topic is scored to determine the extraction order. With the coverage of effective words in the summary as the optimization objective, summary sentences are then selected within each sub-topic. Sentences are chosen so as to reduce information redundancy both between and within sub-topics, which greatly improves the summary's information coverage. Experiments show that the generated summaries are satisfactory.
Keywords: multi-document summarization, sub-topic, optimized sentence selection
14. Multi-document automatic summarization with the topic model LDA (cited by 23)
Authors: Yang Xiao, Ma Jun, Yang Tongfeng, Du Yanqi, Shao Haimin. CAAI Transactions on Intelligent Systems, 2010, No. 2, pp. 169-176 (8 pages)
Representing the multi-document summarization problem with probabilistic topic models has attracted researchers' attention in recent years. LDA (latent Dirichlet allocation) is one of the representative probabilistic generative topic models. This paper proposes an LDA-based summarization method: perplexity determines the number of topics in the LDA model; Gibbs sampling yields the topic probability distribution of each sentence and the word probability distribution of each topic; the importance of each topic is determined by the sum of its weights across sentences; and two different sentence-weighting models are proposed based on the topic and sentence probability distributions in the LDA model. Using the ROUGE evaluation standard, comparisons on the generic multi-document summarization test set DUC2002 against the state-of-the-art SumBasic method and two other LDA-based multi-document summarization methods show that the proposed method outperforms SumBasic on all ROUGE metrics and also has an advantage over the other LDA-based summarizers.
Keywords: multi-document automatic summarization, sentence scoring, topic model, LDA, number of topics
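Once Gibbs sampling has produced per-sentence topic distributions, the topic-importance and sentence-scoring steps reduce to simple arithmetic. The weighting below is one plausible sentence-weight model, an assumption rather than the paper's exact formula:

```python
def topic_importance(sentence_topics):
    # Importance of each topic: the sum of its weight over all sentences,
    # as the abstract describes. sentence_topics[i][k] is the probability
    # of topic k in sentence i.
    n_topics = len(sentence_topics[0])
    return [sum(s[k] for s in sentence_topics) for k in range(n_topics)]

def score_sentences(sentence_topics):
    # Weight each sentence's topic distribution by the global topic
    # importance; higher scores mark sentences about dominant topics.
    imp = topic_importance(sentence_topics)
    return [sum(p * w for p, w in zip(s, imp)) for s in sentence_topics]
```

In the full pipeline these distributions would come from an LDA model fitted with the perplexity-selected number of topics.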
15. Automatic summarization balancing entropy and relevance (cited by 9)
Authors: Luo Wenjuan, Ma Huifang, He Qing, Shi Zhongzhi. Journal of Chinese Information Processing (CSCD, PKU Core), 2011, No. 5, pp. 9-16 (8 pages)
Generating a high-quality document summary requires describing the document concisely without losing information, which is a major challenge for automatic summarization. This paper argues that a high-quality summary must cover as much of the original document's information as possible while remaining as compact as possible. From this perspective, two groups of features, entropy and relevance, are extracted from documents to balance the summary's information coverage and compactness. A regression-based supervised summarization technique weighs the extracted features, and systematic experiments are conducted on both single-document and multi-document summarization. The results show that balancing entropy and relevance effectively improves summary quality in both settings.
Keywords: automatic summarization, sentence feature extraction, relevance
16. Chinese automatic text summarization based on weighted TextRank (cited by 20)
Authors: Huang Bo, Liu Chuancai. Application Research of Computers (CSCD, PKU Core), 2020, No. 2, pp. 407-410 (4 pages)
Existing Chinese automatic text summarization methods mainly exploit information in the text itself and cannot fully use information such as semantic relatedness between words. This paper therefore proposes an improved Chinese text summarization method that fuses external corpus information, in the form of word vectors, into the TextRank algorithm. By combining TextRank with word2vec, each word in a sentence is mapped into a high-dimensional vocabulary space to form sentence vectors. The influence weights between sentences are computed with full consideration of factors such as sentence similarity, keyword coverage, and each sentence's similarity to the title, and the top-ranked sentences are reordered to form the summary. Experimental results show that this method performs well on the dataset used and extracts Chinese summaries better than the original method.
Keywords: text summarization, TextRank, word vectors, sentence similarity
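The TextRank core the abstract builds on can be sketched as power iteration over a sentence-similarity graph. For brevity this sketch scores similarity by word overlap instead of the paper's word2vec sentence vectors, and the damping factor and iteration count are conventional defaults, not values from the paper:

```python
from math import sqrt

def overlap_sim(a, b):
    # Word-overlap similarity between two tokenized sentences
    # (a stand-in for cosine similarity of word2vec sentence vectors).
    a, b = set(a), set(b)
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b))) if a and b else 0.0

def textrank(sentences, d=0.85, iters=50):
    # Standard TextRank power iteration over the sentence graph.
    toks = [s.lower().split() for s in sentences]
    n = len(sentences)
    sim = [[overlap_sim(toks[i], toks[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    out = [sum(row) for row in sim]  # total outgoing edge weight per node
    scores = [1.0] * n
    for _ in range(iters):
        scores = [(1 - d) + d * sum(sim[j][i] / out[j] * scores[j]
                                    for j in range(n) if out[j] > 0)
                  for i in range(n)]
    return scores
```

The paper's weighted variant would replace `overlap_sim` with a combination of vector similarity, keyword coverage, and title similarity when building `sim`.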
17. An automatic summarization method based on topic word sets (cited by 6)
Authors: Liu Xinglin, Zheng Qilun, Ma Qianli. Application Research of Computers (CSCD, PKU Core), 2011, No. 4, pp. 1322-1324 (3 pages)
This paper proposes an automatic text summarization method based on topic word sets. From the extracted topic word set, the weight of each sentence containing a topic word is computed, weighted by the topic word's weight, yielding a total weight for every sentence associated with the set. Sentences with the largest weights are then selected according to the summarization ratio and output in their original order. Experiments were conducted on the single-document summarization corpus of the Information Retrieval Laboratory at Harbin Institute of Technology, with the resulting summaries judged by an intrinsic automatic evaluation method; the overall F-score reached 66.07%. The results show that the summaries produced by this method are of high quality and close to the reference summaries.
Keywords: automatic summarization, topic word set, sentence weight, natural language processing
18. Multi-document sentiment summarization based on latent Dirichlet allocation (cited by 9)
Authors: Xun Jing, Liu Peiyu, Yang Yuzhen, Zhang Yanhui. Journal of Computer Applications (CSCD, PKU Core), 2014, No. 6, pp. 1636-1640 (5 pages)
Current methods struggle to capture the global sentiment orientation of review texts. To address this, a multi-document sentiment summarization method based on the latent Dirichlet allocation (LDA) model is proposed. The method first performs sentiment analysis on the given sentences and extracts those carrying subjective evaluations; it then represents the extracted sentences with an LDA model and computes sentence weights from word importance and sentence features; finally, it extracts the sentiment summary. Experimental results show that the method effectively identifies key sentiment sentences and performs well in terms of precision, recall, and F-measure.
Keywords: latent Dirichlet allocation model, subjective sentence, sentiment analysis, multi-document summarization
19. SBGA: a multi-document automatic summarization system using an evolutionary algorithm for sentence extraction (cited by 10)
Authors: Liu Dexi, He Yanxiang, Ji Donghong, Yang Hua. Journal of Chinese Information Processing (CSCD, PKU Core), 2006, No. 6, pp. 46-53 (8 pages)
The SBGA system treats multi-document automatic summarization as a combinatorial optimization problem of extracting sentences from the source document set and uses an evolutionary algorithm to find a near-optimal solution. Compared with clustering-based sentence extraction, evolutionary sentence extraction is oriented toward the summary as a whole and therefore obtains a better near-optimal summary. The evolutionary algorithm's evaluation function considers four criteria for a summary: the length matches the user's requirement, information coverage is high, more of the important information conveyed by the source is preserved, and there is no redundancy. In addition, to improve the precision of term frequency computation, SBGA adopts an improved scheme, TFS, which adds the weighted frequencies of a word's synonyms to the original word frequency. Experimental results on the DUC2004 test set show that evolutionary sentence extraction performs very well: its ROUGE-1 score is only 0.55% below that of the best participating system in DUC2004. The improved TFS term frequency computation also contributes to summary quality.
Keywords: computer applications, Chinese information processing, multi-document automatic summarization, evolutionary algorithm, sentence extraction, evaluation function, TFS
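A toy sketch of the evolutionary search and its evaluation function over sentence subsets, in the spirit of the criteria the abstract lists (length fit, coverage, low redundancy). The weights, mutation scheme, and token-count length measure are illustrative assumptions, not SBGA's actual design:

```python
import random

def fitness(subset, sentences, target_len):
    # Combine coverage, length fit, and redundancy into one score.
    toks = [set(sentences[i].lower().split()) for i in subset]
    covered = set().union(*toks) if toks else set()
    all_words = set(w for s in sentences for w in s.lower().split())
    coverage = len(covered) / len(all_words)
    total = sum(len(t) for t in toks)
    redundancy = (total - len(covered)) / total if total else 0.0
    length_fit = 1.0 - abs(total - target_len) / target_len
    return 0.5 * coverage + 0.3 * length_fit - 0.2 * redundancy

def evolve(sentences, k, target_len, generations=30, pop=20, seed=0):
    # Random initial population of k-sentence subsets; keep the best,
    # then mutate by swapping one sentence and accept improvements.
    rng = random.Random(seed)
    n = len(sentences)
    best = max((rng.sample(range(n), k) for _ in range(pop)),
               key=lambda s: fitness(s, sentences, target_len))
    for _ in range(generations):
        cand = list(best)
        cand[rng.randrange(k)] = rng.randrange(n)
        if len(set(cand)) == k and \
           fitness(cand, sentences, target_len) > fitness(best, sentences, target_len):
            best = cand
    return sorted(best)
```

Optimizing the subset as a whole, rather than picking sentences one at a time, is what lets this family of methods trade coverage against redundancy globally.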
20. Research on sentence extraction for text summarization (cited by 10)
Authors: Zhang Longkai, Wang Houfeng. Journal of Chinese Information Processing (CSCD, PKU Core), 2012, No. 2, pp. 97-101 (5 pages)
Extractive summarization selects important sentences from the body text according to some strategy to form a summary. This paper proposes a sentence extraction method whose basic idea is to treat sentence extraction as a sequence labeling problem: a conditional random field model labels each sentence with one of two classes, and the sentences labeled as summary sentences are extracted to form the summary. Because the number of sentences outside the summary far exceeds the number inside it, the labeling process tends to reject labeling a sentence as a summary sentence; to address this, a correction factor is introduced. Experiments show that the method performs well.
Keywords: text summarization, sentence extraction, conditional random field