Recently, automation has come to be considered vital in most fields, since computing methods play a significant role in facilitating work such as automatic text summarization. However, most of the computing methods used in real systems are based on graph models, which are characterized by their simplicity and stability. Thus, this paper proposes an improved extractive text summarization algorithm based on both topic and graph models. The methodology of this work consists of two stages. First, the well-known TextRank algorithm is analyzed and its shortcomings are investigated. Then, an improved method is proposed with a new computational model of sentence weights. Experiments were carried out on the standard DUC2004 and DUC2006 datasets, and the proposed improved graph model algorithm TG-SMR (Topic Graph-Summarizer) is compared to four other text summarization systems. The experimental results show that the proposed TG-SMR algorithm achieves higher ROUGE scores. It is foreseen that the TG-SMR algorithm will open a new horizon concerning the performance of ROUGE evaluation indicators.
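The classic TextRank baseline that TG-SMR builds on ranks sentences by running PageRank over a sentence-similarity graph. A minimal sketch of that baseline (illustrative only, not the authors' code; the TF-IDF similarity function and damping factor are common defaults, not taken from the paper):

```python
# Minimal TextRank-style extractive summarizer (illustrative sketch).
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, num_sentences=3, damping=0.85):
    # Build a sentence-similarity graph from TF-IDF cosine similarity.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    graph = nx.from_numpy_array(sim)
    # PageRank assigns each sentence a centrality-based weight.
    scores = nx.pagerank(graph, alpha=damping)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    # Keep the top sentences, restored to document order.
    chosen = sorted(ranked[:num_sentences])
    return [sentences[i] for i in chosen]

if __name__ == "__main__":
    doc = ["Graph models are simple and stable.",
           "TextRank ranks sentences by graph centrality.",
           "Topic models capture document themes.",
           "Combining both can improve extractive summaries."]
    print(textrank_summary(doc, num_sentences=2))
```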
Text summarization models facilitate biomedical clinicians and researchers in acquiring informative data from enormous domain-specific literature with less time and effort. Evaluating and selecting the most informative sentences from biomedical articles is always challenging. This study aims to develop a dual-mode biomedical text summarization model that achieves enhanced coverage and information retention. The research also checks the fitment of appropriate graph ranking techniques for improved performance of the summarization model. The input biomedical text is mapped to a graph in which meaningful sentences are evaluated as central nodes, along with the critical associations between them. The proposed framework utilizes a top-k similarity technique in combination with UMLS and a sampled probability-based clustering method, which aids in unearthing relevant meanings of the biomedical domain-specific word vectors and finding the best possible associations between crucial sentences. The quality of the framework is assessed via different parameters such as information retention, coverage, readability, cohesion, and ROUGE scores in clustering and non-clustering modes. The significant benefits of the suggested technique are capturing crucial biomedical information with increased coverage and reasonable memory consumption. The configurable settings of the combined parameters reduce execution time, enhance memory utilization, and extract relevant information, outperforming other biomedical baseline models. An improvement of 17% is achieved when the proposed model is checked against similar biomedical text summarizers.
A worthy text summarization should represent the fundamental content of the document. Recent studies on computerized text summarization have tried to present solutions to this challenging problem. Attention models are employed extensively in the text summarization process. Classical attention techniques are utilized to acquire the context data in the decoding phase. Nevertheless, without real and efficient feature extraction, the produced summary may diverge from the core topic. In this article, we present an encoder-decoder attention system employing a dual attention mechanism, in which the attention algorithm gathers main data from the encoder side so that the system can capture and produce more rational main content. The merging of the two attention phases produces precise and rational text summaries. The enhanced attention mechanism gives a high score to text repetition to increase phrase scores; it also captures the relationship between phrases and the title, giving them a higher score. We assessed our proposed model with and without significance optimization using an ablation procedure. Our model with significance optimization achieved the highest performance of 96.7% precision and the least CPU time among other models in both training and sentence extraction.
The term 'executed linguistics' corresponds to an interdisciplinary domain in which solutions are identified and provided for real-time language-related problems. The exponential generation of text data on the Internet must be leveraged to gain knowledgeable insights. The extraction of meaningful insights from text data is crucial, since it can provide value-added solutions for business organizations and end-users. The Automatic Text Summarization (ATS) process reduces the primary size of the text without losing any basic components of the data. The current study introduces an Applied Linguistics-based English Text Summarization using a Mixed Leader-Based Optimizer with Deep Learning (ALTS-MLODL) model. The presented ALTS-MLODL technique aims to summarize text documents in the English language. To accomplish this objective, the proposed ALTS-MLODL technique pre-processes the input documents and primarily extracts a set of features. Next, the MLO algorithm is used for the effectual selection of the extracted features. For the text summarization process, the Cascaded Recurrent Neural Network (CRNN) model is exploited, whereas the Whale Optimization Algorithm (WOA) is used as a hyperparameter optimizer. The exploitation of the MLO-based feature selection and the WOA-based hyperparameter tuning enhances the summarization results. To validate the performance of the ALTS-MLODL technique, numerous simulation analyses were conducted. The experimental results signify the superiority of the proposed ALTS-MLODL technique over other approaches.
This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique. The presented AATS-HTDDNN technique aims to generate summaries of Arabic text. In the presented AATS-HTDDNN technique, the DDNN model is utilized to generate the summary. This study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters relevant to the DDNN model, since they considerably affect the summarization efficiency. This phase shows the novelty of the current study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison study outcomes confirmed the better performance of the AATS-HTDDNN model over other approaches.
Text summarization aims to generate a concise version of the original text. The longer the summary text is, the more detail it retains from the original text, and this depends on the intended use. Therefore, the problem of generating summary texts with desired lengths is a vital task for putting the research into practice. To solve this problem, in this paper we propose a new method to integrate the desired length of the summarized text into the encoder-decoder model for the abstractive text summarization problem. This length parameter is integrated into the encoding phase at each self-attention step and into the decoding process by preserving the remaining length for calculating head attention in the generation process and using it as length embeddings added to the word embeddings. We conducted experiments with the proposed model on two data sets, Cable News Network (CNN) Daily and NEWSROOM, with different desired output lengths. The obtained results show the proposed model's effectiveness compared with related studies.
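One common way to realize such length control, along the lines this abstract describes, is to embed the remaining target length and add it to the token embeddings at each decoding step. A minimal PyTorch-style sketch (hypothetical; the layer sizes and the max-length cap are illustrative choices, not values from the paper):

```python
# Sketch: adding remaining-length embeddings to word embeddings in a decoder.
import torch
import torch.nn as nn

class LengthAwareEmbedding(nn.Module):
    def __init__(self, vocab_size, d_model, max_len=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # One embedding per possible "remaining length" value.
        self.len_emb = nn.Embedding(max_len + 1, d_model)
        self.max_len = max_len

    def forward(self, token_ids, remaining_len):
        # remaining_len: how many tokens may still be generated (per example).
        remaining_len = remaining_len.clamp(0, self.max_len)
        return self.word_emb(token_ids) + self.len_emb(remaining_len).unsqueeze(1)

# Usage: at step t of generation with a length budget L, pass L - t.
emb = LengthAwareEmbedding(vocab_size=32000, d_model=256)
tokens = torch.randint(0, 32000, (2, 10))      # batch of two 10-token prefixes
budget = torch.tensor([40, 80])                # desired summary lengths
out = emb(tokens, budget - tokens.size(1))     # shape: (2, 10, 256)
```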
This paper presents two different algorithms that derive the cohesion structure, in the form of lexical chains, from two kinds of language resources, HowNet and TongYiCiCiLin. Research that connects the cohesion structure of a text to the derivation of its summary is presented. A novel model of automatic text summarization is devised, based on the data provided by lexical chains from original texts. Moreover, the construction rules of lexical chains are modified according to the characteristics of the knowledge database in order to be more suitable for Chinese summarization. Evaluation results show that high-quality indicative summaries are produced from Chinese texts.
Automatic Chinese text summarization for dialogue style is a relatively new research area. In this paper, Latent Semantic Analysis (LSA) is first used to extract semantic knowledge from a given document and all question paragraphs are identified; an automatic text segmentation approach analogous to TextTiling is exploited to improve the precision of correlating question paragraphs and answer paragraphs; finally, some "important" sentences are extracted from the generic content and the question-answer pairs to generate a complete summary. Experimental results showed that our approach is highly efficient and significantly improves the coherence of the summary while not compromising informativeness.
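As a rough illustration of the LSA step (not the paper's implementation; the sentence-scoring heuristic below is one common variant), latent topics can be extracted via a truncated SVD of the sentence-term matrix, and sentences scored by their weight across the leading topics:

```python
# Sketch: LSA-based sentence scoring via truncated SVD of a TF-IDF matrix.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

def lsa_scores(sentences, n_topics=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)   # sentences x terms
    svd = TruncatedSVD(n_components=n_topics)
    topic_weights = svd.fit_transform(tfidf)             # sentences x topics
    # Score each sentence by its magnitude across the latent topics.
    return np.linalg.norm(topic_weights, axis=1)

sentences = ["What time does the meeting start?",
             "The meeting starts at nine in the morning.",
             "Lunch will be provided afterwards."]
print(lsa_scores(sentences))
```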
Due to the advanced development of the Internet and information technologies, the quantity of electronic data in the biomedical sector has increased exponentially. To handle this huge amount of biomedical data, automated multi-document biomedical text summarization becomes an effective and robust approach for accessing the growing body of technical and medical literature in the biomedical sector, by summarizing multiple source documents while retaining the most informative data. Multi-document biomedical text summarization thus plays a vital role in alleviating the issue of accessing precise and updated information. This paper presents a Deep Learning based Attention Long Short-Term Memory (DL-ALSTM) model for multi-document biomedical text summarization. The proposed DL-ALSTM model initially performs data preprocessing to convert the available medical data into a format compatible with further processing. Then, the DL-ALSTM model is executed to summarize the contents of the multiple biomedical documents. To tune the summarization performance of the DL-ALSTM model, the chaotic glowworm swarm optimization (CGSO) algorithm is employed. Extensive experimentation is performed to assess the DL-ALSTM model, and the results are investigated using the PubMed dataset. A comprehensive comparative analysis showcases the efficiency of the proposed DL-ALSTM model against recently presented models.
In the era of Big Data, we are faced with the inevitable and challenging problem of "information overload". To alleviate this problem, it is important to use effective automatic text summarization techniques to obtain key information quickly and efficiently from huge amounts of text. In this paper, we propose a hybrid method for extractive text summarization based on deep learning and graph ranking algorithms (ETSDG). In this method, a pre-trained deep learning model is designed to yield useful sentence embeddings. Given the associations between sentences in the raw documents, a traditional LexRank algorithm with fine-tuning is adopted in ETSDG. In order to improve the performance of the extractive text summarization method, we further integrate the traditional LexRank algorithm with deep learning. Testing results on the DUC2004 data set show that ETSDG performs better in ROUGE metrics than certain benchmark methods.
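A hybrid of this flavor can be sketched by running LexRank-style power iteration over a cosine-similarity matrix built from dense sentence embeddings; the similarity threshold and damping below are placeholders, not the values used in ETSDG, and the embeddings would come from a pre-trained encoder rather than the random stand-ins here:

```python
# Sketch: LexRank over dense sentence embeddings (power iteration).
import numpy as np

def lexrank(embeddings, threshold=0.3, damping=0.85, iters=50):
    # Cosine similarity between all pairs of sentence embeddings.
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = norm @ norm.T
    # Threshold and row-normalize to get a stochastic adjacency matrix.
    adj = (sim >= threshold).astype(float)
    adj /= adj.sum(axis=1, keepdims=True)
    n = len(embeddings)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                     # PageRank-style update
        scores = (1 - damping) / n + damping * adj.T @ scores
    return scores

rng = np.random.default_rng(0)
print(lexrank(rng.normal(size=(5, 8))))        # 5 "sentences", 8-dim vectors
```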
A substantial amount of textual data is present electronically in several languages. These texts have led to considerable information redundancy, and it is essential to remove this redundancy and decrease the reading time of the data. Therefore, we need a computerized text summarization technique to extract relevant information from groups of text documents with correlated subjects. This paper proposes a language-independent extractive summarization technique based on clustering and optimization. The clustering technique determines the main subjects of the text, while the proposed optimization technique minimizes redundancy and maximizes significance. Experiments are devised using the BillSum dataset for the English language, MLSUM for German and Russian, and Mawdoo3 for the Arabic language, and are evaluated using ROUGE metrics. The results showed the effectiveness of the proposed technique compared to other language-dependent and language-independent summarization techniques: our technique achieved better ROUGE metrics for all the utilized datasets, accomplishing an average F-measure of 41.9% for ROUGE-1, 18.7% for ROUGE-2, 39.4% for ROUGE-3, and 16.8% for ROUGE-4 across all datasets using all three objectives. Our system also exhibited improvements of 26.6%, 35.5%, 34.65%, and 31.54% with respect to a recent model for BillSum summarization in terms of ROUGE evaluation. Our model's performance is higher than the compared models, especially in ROUGE-2, which measures bigram matching.
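ROUGE-n, the metric these figures rest on, counts n-gram overlap between a candidate summary and a reference. A minimal recall/precision/F-measure sketch (simplified; real evaluations typically use the official ROUGE toolkit with stemming and stopword options):

```python
# Sketch: ROUGE-n as clipped n-gram overlap (simplified, no stemming).
from collections import Counter

def rouge_n(candidate, reference, n=2):
    def ngrams(text, n):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())      # clipped n-gram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * recall * precision / max(recall + precision, 1e-9)
    return {"recall": recall, "precision": precision, "f1": f1}

print(rouge_n("the bill reduces federal spending",
              "the bill cuts federal spending sharply", n=2))
```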
This investigation presents an approach to Extractive Automatic Text Summarization (EATS). A framework focused on the summarization of a single document has been developed, using the TF-IDF method (Term Frequency-Inverse Document Frequency) as a reference: the document is divided into a subset of documents, a value is generated for each of the words contained in each document, and those documents whose TF-IDF is equal to or higher than a threshold are taken to represent greater importance; they can therefore be weighted to generate a text summary according to the user's request. This document represents a model derived from the application of text mining in today's world. We demonstrate how the summarization is performed; random values were used to check its performance. The experimental results show a satisfactory and understandable summary, and summaries were found to run efficiently and quickly, showing which text sentences are most important according to the threshold selected by the user.
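The thresholding idea can be sketched as scoring each sentence by the mean TF-IDF weight of its terms and keeping those at or above a user-chosen threshold (an illustrative reading of the abstract, not the authors' exact procedure):

```python
# Sketch: TF-IDF threshold-based extractive summarization.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_threshold_summary(sentences, threshold=0.25):
    # Treat each sentence as a "sub-document" and weight its terms.
    matrix = TfidfVectorizer().fit_transform(sentences).toarray()
    # Average the nonzero TF-IDF weights in each sentence.
    scores = [row[row > 0].mean() if row.any() else 0.0 for row in matrix]
    return [s for s, sc in zip(sentences, scores) if sc >= threshold]

sentences = ["Tax revenue increased sharply this quarter.",
             "The weather was pleasant.",
             "The increase in tax revenue funds new infrastructure."]
print(tfidf_threshold_summary(sentences, threshold=0.3))
```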
Nowadays, data is increasing very rapidly in every domain, such as social media, news, education, and banking. Most of this data and information is in the form of text, and most of that text contains little valuable information and knowledge among lots of unwanted content. To fetch this valuable information out of huge text documents, we need a summarizer that is capable of extracting data automatically and, at the same time, summarizing the document, particularly the text of a new document, without losing any vital information. Summarization can be extractive or abstractive. Extractive summarization picks high-ranking sentences from the text, ranked using sentence and word features, and puts them together to produce the summary. Abstractive summarization is based on understanding the key ideas in the given text and then expressing those ideas in pure natural language; it is the latest problem area for NLP (natural language processing), ML (machine learning), and NN (neural networks). In this paper, the foremost techniques for automatic text summarization are defined, the different existing methods are reviewed, and their effectiveness and limitations are described. Further, a novel approach based on neural networks and LSTM is discussed. In the machine learning approach, the underlying architecture is called the encoder-decoder.
Automatic text summarization (ATS) has achieved impressive performance thanks to recent advances in deep learning (DL) and the availability of large-scale corpora. The key points in ATS are to estimate the salience of information and to generate coherent results. Recently, a variety of DL-based approaches have been developed to better consider these two aspects. However, there is still a lack of comprehensive literature reviews of DL-based ATS approaches. The aim of this paper is to comprehensively review the significant DL-based approaches that have been proposed in the literature with respect to generic ATS tasks, and to provide a walk-through of their evolution. We first give an overview of ATS and DL. Comparisons of the datasets commonly used for model training, validation, and evaluation are also given. Then we summarize single-document summarization approaches, followed by an overview of multi-document summarization approaches. We further analyze the performance of popular ATS models on common datasets. Various popular approaches can be employed for different ATS tasks. Finally, we propose potential research directions in this fast-growing field. We hope this exploration can provide new insights into future research on DL-based ATS.
The rise of social networking enables the development of multilingual, Internet-accessible digital documents in several languages. Such documents need to be evaluated through Cross-Language Text Summarization (CLTS), which concerns the generation of target documents from disparate language sources; the documents must be processed together with contextual semantic data under a decoding scheme. This paper presents multilingual cross-language processing of documents with abstractive summarization. The proposed model is the Hidden Markov Model LSTM Reinforcement Learning (HMMlstmRL) model. First, the developed model uses the Hidden Markov model to compute keywords across the cross-language words for clustering. In the second stage, bi-directional long short-term memory networks are used for keyword extraction in the cross-language process. Finally, the proposed HMMlstmRL uses a voting concept in reinforcement learning for the identification and extraction of keywords. The performance of the proposed HMMlstmRL is 2% better than that of a conventional bi-directional LSTM model.
Automatic text summarization (ATS) plays a significant role in Natural Language Processing (NLP). Abstractive summarization produces summaries by identifying and compressing the most important information in a document. However, only relatively few comprehensively evaluated abstractive summarization models work well for specific types of reports, owing to their unstructured and oral-language text characteristics. In particular, Chinese complaint reports, generated by urban complainants and collected by government employees, describe existing problems in residents' daily lives, and the reflected problems require a speedy response. Therefore, automatic summarization tasks for these reports have been developed. However, as with traditional summarization models, the generated summaries still suffer from problems of informativeness and conciseness. To address these issues and generate suitably informative and less redundant summaries, a topic-based abstractive summarization method is proposed to obtain global and local features. Additionally, a heterogeneous graph of the original document is constructed using word-level and topic-level features. Experiments and analyses on public review datasets (Yelp and Amazon) and our constructed dataset (Chinese complaint reports) show that the proposed framework effectively improves the performance of the abstractive summarization model for Chinese complaint reports.
With the help of pre-trained language models, the accuracy of the entity linking task has made great strides in recent years. However, most models with excellent performance require fine-tuning on a large amount of training data using large pre-trained language models, which is a hardware threshold for accomplishing this task. Some researchers have achieved competitive results with less training data through ingenious methods, such as utilizing information provided by a named entity recognition model. This paper presents a novel semantic-enhancement-based entity linking approach, named semantically enhanced hardware-friendly entity linking (SHEL), which is designed to be hardware-friendly and efficient while maintaining good performance. Specifically, SHEL's semantic enhancement consists of three aspects: (1) semantic compression of entity descriptions using a text summarization model; (2) maximizing the capture of mention contexts using asymmetric heuristics; and (3) calculating a fixed-size mention representation through pooling operations. This series of semantic enhancement methods effectively improves the model's ability to capture semantic information while taking the hardware constraints into account, and significantly improves the model's convergence speed, by more than 50% compared with the strong baseline model proposed in this paper. In terms of performance, SHEL is comparable to the previous method, with superior performance on six well-established datasets, even though SHEL is trained using a smaller pre-trained language model as the encoder.
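Step (3), a fixed-size mention representation via pooling, can be illustrated by mean-pooling the encoder's token vectors over the mention span (a generic sketch; the abstract does not specify which pooling operation SHEL actually uses):

```python
# Sketch: fixed-size mention representation by mean pooling over a span.
import torch

def pool_mention(hidden_states, span_start, span_end):
    # hidden_states: (seq_len, d_model) token vectors from an encoder.
    # Averaging the vectors inside the mention span yields one d_model
    # vector regardless of how many tokens the mention occupies.
    return hidden_states[span_start:span_end].mean(dim=0)

hidden = torch.randn(12, 256)             # 12 encoded tokens, 256-dim each
mention_vec = pool_mention(hidden, 4, 7)  # tokens 4..6 form the mention
print(mention_vec.shape)                  # torch.Size([256])
```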
In recent years, many text summarization models based on pre-training methods have achieved very good results. However, in these models, semantic deviations can easily occur between the original input representation and the representation that has passed through the multi-layer encoder, which may result in inconsistencies between the generated summary and the source text content. Bidirectional Encoder Representations from Transformers (BERT) improves the performance of many tasks in Natural Language Processing (NLP). Although BERT has a strong capability to encode context, it lacks fine-grained semantic representation. To solve these two problems, we propose a semantic supervision method based on Capsule Networks. Firstly, we extract the fine-grained semantic representations of the input and of the encoded result in BERT by a Capsule Network. Secondly, we use the fine-grained semantic representation of the input to supervise the fine-grained semantic representation of the encoded result. We evaluated our model on a popular Chinese social media dataset (LCSTS), and the results show that our model achieved higher ROUGE scores (including R-1 and R-2) and outperformed the baseline systems. Finally, we conducted a comparative study on the stability of the model, and the experimental results show that our model is more stable.
This paper reports part of a study to develop a method for automatic multi-document summarization. The current focus is on dissertation abstracts in the field of sociology. The summarization method uses macro-level and micro-level discourse structure to identify important information that can be extracted from dissertation abstracts, and then uses a variable-based framework to integrate and organize the extracted information across dissertation abstracts. This framework focuses on the research concepts and research relationships found in sociology dissertation abstracts and has a hierarchical structure. A taxonomy is constructed to support the summarization process in two ways: (1) helping to identify important concepts and relations expressed in the text, and (2) providing a structure for linking similar concepts in different abstracts. This paper describes the variable-based framework and the summarization process, and then reports the construction of the taxonomy supporting the summarization process. An example is provided to show how the constructed taxonomy is used to identify important concepts and integrate the concepts extracted from different abstracts.
Opinion summarization recapitulates opinions about a common topic automatically. The primary motive of summarization is to preserve the properties of the text while shortening it without loss in its semantics. The need for efficient automatic summarization has resulted in increased interest among the Natural Language Processing and Text Mining communities. This paper emphasizes building an extractive summarization system combining principal component analysis for dimensionality reduction with a bidirectional Recurrent Neural Network and Long Short-Term Memory (RNN-LSTM) deep learning model for short and exact synopses using a seq2seq model. It presents a paradigm shift with regard to the way extractive summaries are generated, and novel algorithms for word extraction using assertions are proposed. The semantic framework is well-grounded in this research, facilitating a correct decision-making process after reviewing huge numbers of online reviews, taking all their important features into account. The advantages of the proposed solution include greater computational efficiency, better inferences from social media, data understanding, robustness, and handling of sparse data. Experiments on different datasets also outperform previous research, and the accuracy is claimed to exceed the baselines, showing the efficiency and novelty of the work. Comparisons are done by calculating accuracy against different baselines using the ROUGE tool.