With the number of social media users ramping up, microblogs are generated and shared at record levels. The high momentum and large volume of short texts bring redundancy and noise, in which users and analysts often find it difficult to elicit useful information of interest. In this paper, we study query-focused summarization as a solution to this issue and propose a novel summarization framework to generate personalized online summaries and historical summaries of arbitrary time durations. Our framework can deal with dynamic, perpetual, and large-scale microblogging streams. Specifically, we propose an online microblogging stream clustering algorithm to cluster microblogs and maintain distilled statistics called Microblog Cluster Vectors (MCV). We then develop a ranking method to extract the sentences most representative of the query from the MCVs and generate a query-focused summary of arbitrary time duration. Our experiments on large-scale real microblogs demonstrate the efficiency and effectiveness of our approach.
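The single-pass clustering idea behind this abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the exact fields of an MCV are not given in the abstract, so the `ClusterVector` statistics (term-weight sum and post count) and the similarity threshold are assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts term -> weight)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ClusterVector:
    """Distilled cluster statistics, loosely in the spirit of an MCV:
    a summed term-weight vector and a post count (hypothetical fields)."""
    def __init__(self):
        self.term_sum = {}   # summed term weights of member posts
        self.count = 0       # number of posts absorbed

    def centroid(self):
        return {t: w / self.count for t, w in self.term_sum.items()}

    def absorb(self, vec):
        for t, w in vec.items():
            self.term_sum[t] = self.term_sum.get(t, 0.0) + w
        self.count += 1

def online_cluster(stream, threshold=0.5):
    """Single-pass clustering: each post joins the most similar cluster
    if similarity reaches the threshold, otherwise it seeds a new cluster."""
    clusters = []
    for vec in stream:
        best, best_sim = None, threshold
        for c in clusters:
            sim = cosine(vec, c.centroid())
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            best = ClusterVector()
            clusters.append(best)
        best.absorb(vec)
    return clusters
```

Because each cluster keeps only aggregate statistics, the memory footprint stays bounded by the number of clusters rather than the number of posts, which is what makes a single pass over a perpetual stream feasible.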
Due to the exponential growth of video data, aided by rapid advancements in multimedia technologies, it has become difficult for users to obtain information from long video sequences. The process of producing an abstract of an entire video that contains its most representative frames is known as static video summarization, and it enables rapid exploration, indexing, and retrieval of massive video libraries. We propose a framework for static video summarization based on Binary Robust Invariant Scalable Keypoints (BRISK) and the bisecting K-means clustering algorithm. The method recognizes relevant frames by extracting BRISK keypoints and descriptors from the video sequence. The frames' BRISK features are clustered using bisecting K-means, and each keyframe is determined by selecting the frame closest to its cluster center. Rather than fixing clustering parameters in advance, the appropriate number of clusters is determined using the silhouette coefficient. Experiments were carried out on the publicly available Open Video Project (OVP) dataset, which contains videos of different genres. The proposed method's effectiveness is compared to existing methods using a variety of evaluation metrics, and it achieves a favorable trade-off between computational cost and quality.
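The silhouette coefficient used here to pick the number of clusters can be computed as below. This is a generic sketch of the standard definition, not the paper's pipeline: BRISK extraction and bisecting K-means are elided, and the toy 2-D points stand in for frame descriptors.

```python
def silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point, where a is
    the mean distance to the point's own cluster and b is the mean distance
    to the nearest other cluster. Higher values indicate a better clustering."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)

    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q is not p]
        if not own:                      # singleton cluster: silhouette is 0
            scores.append(0.0)
            continue
        a = sum(dist(p, q) for q in own) / len(own)
        b = min(sum(dist(p, q) for q in qs) / len(qs)
                for l2, qs in clusters.items() if l2 != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

In a model-selection loop one would run the clustering for several candidate values of k and keep the k whose labeling maximizes this coefficient, which is how a clustering parameter can be chosen without user input.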
In the era of Big Data, we face the inevitable and challenging problem of "information overload". To alleviate this problem, it is important to use effective automatic text summarization techniques to obtain the key information quickly and efficiently from huge amounts of text. In this paper, we propose a hybrid method for extractive text summarization based on deep learning and graph ranking algorithms (ETSDG). In this method, a pre-trained deep learning model is used to yield useful sentence embeddings. Given the associations between sentences in the raw documents, a traditional LexRank algorithm with fine-tuning is adopted in ETSDG; integrating LexRank with deep learning in this way improves the performance of the extractive summarizer. Testing results on the DUC2004 dataset show that ETSDG outperforms certain benchmark methods on ROUGE metrics.
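The graph-ranking half of such a hybrid can be sketched as a power iteration over a sentence-similarity matrix, in the spirit of LexRank. The damping factor and iteration count below are conventional defaults, and the similarity matrix would in practice come from cosine similarity of the sentence embeddings, which this sketch takes as given.

```python
def lexrank(sim, damping=0.85, iters=50):
    """LexRank-style ranking: power iteration over a row-normalized
    sentence-similarity matrix. Returns one centrality score per sentence."""
    n = len(sim)
    # Row-normalize the similarity matrix into a stochastic transition matrix.
    trans = []
    for row in sim:
        s = sum(row)
        trans.append([v / s for v in row] if s else [1.0 / n] * n)

    rank = [1.0 / n] * n
    for _ in range(iters):
        rank = [(1 - damping) / n
                + damping * sum(rank[j] * trans[j][i] for j in range(n))
                for i in range(n)]
    return rank
```

Sentences with the highest scores are the ones most central to the document graph and become the extractive summary; swapping bag-of-words similarities for embedding similarities is exactly the kind of integration the abstract describes.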
This paper describes a procedure developed, and reports on experiments performed, to study the utility of combining a structural property of a text's sentences with term expansion using WordNet [1] and a local thesaurus [2] for selecting the most appropriate extractive summary of a particular document. Sentences were tagged and normalized, then subjected to the Longest Common Subsequence (LCS) algorithm [3] [4] to select the most similar subset of sentences. Similarity was computed from the LCS of each pair of sentences in the document; a normalized score was calculated and used to rank sentences. A top subset of the most similar sentences was then tokenized to produce a set of important keywords or terms. These terms were further expanded into two sets using 1) WordNet and 2) a local electronic dictionary/thesaurus. The three sets obtained (the original and the two expanded ones) were then recycled to further refine and expand the list of sentences selected from the original document. The process was repeated a number of times to find the most representative set of sentences, and a final set of the top (best) sentences was selected as candidates for the summary. To verify the utility of the procedure, a number of experiments were conducted on an email corpus. The results were compared to those produced by human annotators as well as to results produced by a basic sentence-similarity calculation method. The results were very encouraging and compared well to those of the human annotators and of Jaccard sentence similarity.
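The LCS-based pairwise sentence similarity at the core of this procedure can be sketched with the classic dynamic-programming recurrence. The normalization by the longer sentence is one plausible choice; the paper's exact scoring formula is not stated in the abstract.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (classic DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(s1, s2):
    """Normalized sentence similarity: LCS length over the longer sentence
    (an assumed normalization for illustration)."""
    a, b = s1.lower().split(), s2.lower().split()
    return lcs_len(a, b) / max(len(a), len(b)) if a and b else 0.0
```

Scoring every sentence pair this way and summing each sentence's scores gives the ranking from which the most similar subset is drawn.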
This investigation presents an approach to Extractive Automatic Text Summarization (EATS). A framework focused on summarizing a single document has been developed, using the TF-IDF (Term Frequency, Inverse Document Frequency) method as a reference: the document is divided into a subset of documents and a value is generated for each word contained in each document. The documents whose TF-IDF scores are equal to or higher than the threshold are those of greatest importance; they can therefore be weighted to generate a text summary according to the user's request. This work represents a model derived from applying text mining in today's world. We demonstrate how the summarization is performed, using random values to check its performance. The experimental results show a satisfactory and understandable summary; the summaries were found to run efficiently and quickly, showing which text sentences are most important according to the threshold selected by the user.
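The TF-IDF-with-threshold selection described here can be sketched as follows, treating each sentence as one of the "subset of documents". The weighting `tf * log(N / df)` is the textbook formulation; the paper's exact variant and threshold handling are assumptions.

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Per-document TF-IDF: tf(t, d) * log(N / df(t)) for every term."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()                     # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append({t: (c / len(toks)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

def select_sentences(docs, threshold):
    """Keep the sentences whose strongest TF-IDF term meets the user threshold."""
    return [d for d, s in zip(docs, tfidf_scores(docs))
            if s and max(s.values()) >= threshold]
```

Raising the threshold shrinks the summary toward only the most distinctive sentences, which matches the user-controlled behavior the abstract describes.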
The vast availability of information sources has created a need for research on automatic summarization. Current methods work either by extraction or by abstraction. Extraction methods are attractive because they are robust and independent of the language used. An extractive summary is obtained by selecting sentences of the original source based on their information content. This selection can be automated using a classification function induced by a machine learning algorithm, which classifies sentences into two groups: important or non-important. The important sentences then form the summary. However, the quality of this function depends directly on the training set used to induce it. This paper proposes an original way of optimizing the training set by inserting lexemes obtained from ontological knowledge bases, so that the optimized training set is reinforced by ontological knowledge. An experiment with four machine learning algorithms was conducted to validate this proposition; the improvement achieved is clearly significant for each of the algorithms.
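One way to picture the lexeme-insertion idea is to expand each training sentence's features with related lexemes from an ontology before the classifier sees them. The tiny hand-made `ONTOLOGY` mapping below is purely illustrative; the paper draws its lexemes from real ontological knowledge bases.

```python
# Toy ontology: each known term maps to related lexemes (hypothetical entries).
ONTOLOGY = {
    "car": ["vehicle", "automobile"],
    "dog": ["animal", "canine"],
}

def featurize(sentence, ontology=ONTOLOGY):
    """Bag-of-words features for a sentence, reinforced with ontological lexemes
    so that semantically related sentences share more features."""
    feats = set(sentence.lower().split())
    for tok in list(feats):
        feats.update(ontology.get(tok, []))
    return feats
```

After expansion, "the car stopped" and "a vehicle stopped" overlap on both "stopped" and "vehicle", giving the induced classifier evidence it would otherwise miss; this is the sense in which the training set is reinforced.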
Given the increasing volume of text documents, automatic summarization is an important tool for quick and optimal use of such sources. Automatic summarization is a text compression process that produces a shorter document giving quick access to the important goals and main features of the input document. In this study, a novel method is introduced for selective text summarization using a genetic algorithm and the generation of repetitive patterns. An important feature of the proposed summarizer, compared with previous methods, is that it identifies and extracts the relationships between the main features of the input text and creates repetitive patterns in order to produce and optimize the feature vector of the main document when generating the summary. The study attempts to cover all the main requirements of a summary, including an unambiguous rendering with the highest possible precision, continuity, and consistency. To investigate the efficiency of the proposed algorithm, the results were evaluated with respect to precision and recall. The evaluation showed optimization of the feature dimensions and generation of a sequence of summary sentences most consistent with the main goals and features of the input document.
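A genetic algorithm for sentence selection can be sketched as below: chromosomes are bitmasks over sentences, and fitness rewards high-scoring sentences while penalizing deviation from a target length. The fitness function here is a deliberately simple stand-in; the paper's fitness over repetitive patterns and feature vectors is richer than this.

```python
import random

def ga_select(scores, k, pop_size=30, gens=40, seed=0):
    """Select roughly k sentences maximizing total score with a tiny GA.
    `scores` is one importance score per sentence (illustrative fitness only)."""
    rng = random.Random(seed)
    n = len(scores)

    def fitness(mask):
        chosen = [scores[i] for i in range(n) if mask[i]]
        # Reward total importance, penalize missing the target length k.
        return sum(chosen) - abs(len(chosen) - k)

    # Random initial population of bitmasks.
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # single-bit mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return [i for i in range(n) if best[i]]
```

Because elitism never discards the best chromosome, fitness is non-decreasing across generations, which is what lets the search converge on a consistent sentence subset.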
Nowadays, people use online resources such as educational videos and courses. However, such videos and courses are mostly long, so summarizing them is valuable. The video contents (visual, audio, and subtitles) can be analyzed to generate textual summaries, i.e., notes. Video subtitles contain significant information, so summarizing subtitles is an effective way to concentrate on the necessary details. Most existing studies use Term Frequency-Inverse Document Frequency (TF-IDF) and Latent Semantic Analysis (LSA) models to create lecture summaries. This study takes another approach and applies Latent Dirichlet Allocation (LDA), which has proved effective in document summarization. The proposed LDA summarization model follows three phases. The first phase prepares the subtitle file for modelling through preprocessing steps such as removing stop words. In the second phase, the LDA model is trained on the subtitles to generate the keyword list used to extract important sentences. In the third phase, a summary is generated based on the keyword list. The summaries generated by LDA were lengthy, so a length enhancement method is also proposed. For the evaluation, the authors developed manual summaries of the existing "EDUVSUM" educational video dataset and compared the generated summaries with the manually generated outlines using two methods: (i) Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and (ii) human evaluation. The LDA-based summaries outperform those generated by TF-IDF and LSA. Besides reducing summary length, the proposed length enhancement method also improved the summaries' precision rates. Other domains, such as news videos, can apply the proposed method for video summarization.
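The third phase, generating a summary from an LDA keyword list, can be sketched as keyword-overlap scoring under a word budget. LDA training itself is elided here (the keyword list is taken as given), and the `max_words` cap is only a crude stand-in for the paper's length enhancement method.

```python
def summarize(sentences, keywords, max_words=30):
    """Score each sentence by its overlap with the topic-keyword list, then
    add sentences in score order until a word budget is reached."""
    kw = set(k.lower() for k in keywords)
    scored = sorted(
        sentences,
        key=lambda s: len(kw & set(s.lower().split())),
        reverse=True,
    )
    summary, used = [], 0
    for s in scored:
        w = len(s.split())
        if used + w <= max_words:
            summary.append(s)
            used += w
    # Restore the sentences' original order for readability.
    return [s for s in sentences if s in summary]
```

The word budget is what keeps an otherwise lengthy keyword-driven extraction to a note-sized output, which is the problem the length enhancement method addresses.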
With the remarkable growth of textual data sources in recent years, easy, fast, and accurate text processing has become a challenge with significant payoffs. Automatic text summarization is the process of compressing text documents into shorter summaries for easier review of their core contents, which must be done without losing important features and information. This paper introduces a new hybrid method for extractive text summarization with feature selection based on text structure. The major advantage of the proposed method over previous systems is its modeling of text structure and of the relationships between entities in the input text, which improves the sentence feature selection process and leads to unambiguous, concise, consistent, and coherent summaries. The paper also presents an evaluation of the proposed method based on precision and recall criteria, showing that it produces summaries consisting of chains of sentences with the aforementioned characteristics drawn from the original text.
Nowadays, data is increasing very rapidly in every domain, such as social media, news, education, and banking. Most of this data and information is in the form of text, and most of that text contains little valuable information and knowledge amid lots of unwanted content. To fetch the valuable information out of a huge text document, we need a summarizer that can extract data automatically and, at the same time, summarize the document, particularly the text of a new document, without losing any vital information. Summarization can be extractive or abstractive. Extractive summarization picks high-ranking sentences from the text, scored using sentence and word features, and puts them together to produce the summary. Abstractive summarization is based on understanding the key ideas in the given text and then expressing those ideas in natural language; it is the newest problem area for NLP (natural language processing), ML (machine learning), and NN (neural networks). In this paper, the foremost techniques for automatic text summarization are described, the different existing methods are reviewed, and their effectiveness and limitations are discussed. A novel approach based on neural networks and LSTMs is further discussed. In the machine learning approach, the underlying architecture is called the encoder-decoder.
Funding: This work was supported by the Chongqing Research Program of Basic Research and Frontier Technology (cstc2017jcyjAX0071), the Basic and Advanced Research Projects of CSTC (cstc2019jcyjzdxm0102), the Chongqing Science and Technology Innovation Leading Talent Support Program (CSTCCXLJRC201908), and the Science and Technology Research Program of Chongqing Municipal Education Commission (KJZD-K201900605).
Funding: The authors would like to thank Research Supporting Project Number (RSP2024R444), King Saud University, Riyadh, Saudi Arabia.