With the development of Internet technology, the explosive growth of Internet information has made it difficult to filter effective information. Finding a high-accuracy model for text classification has become a critical problem for text filtering, especially for Chinese texts. This paper selected manually calibrated comment data from the Douban movie website for research. First, a text filtering model based on the BP neural network was built. Second, based on the Term Frequency-Inverse Document Frequency (TF-IDF) vector space model and the doc2vec method, the text word frequency vector and the text semantic vector were obtained respectively, and the text word frequency vector was linearly reduced by the Principal Component Analysis (PCA) method. Third, the dimensionality-reduced word frequency vector and the text semantic vector were combined with the text value degree to construct the text synthesis vector. Experiments show that the model combining the dimensionality-reduced word frequency vector, the text semantic vector, and the text value degree reached the highest accuracy, 84.67%.
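The feature-construction pipeline this abstract describes can be sketched in a few lines. The sketch below is illustrative only, not the authors' code: the toy corpus, the vector dimensions, and the per-document value-degree column are all assumptions, and the resulting synthesis vector would feed the BP (feedforward) neural network classifier.

```python
# Minimal sketch of the described pipeline: TF-IDF word frequency vectors
# reduced by PCA, doc2vec semantic vectors, and a per-document "value degree"
# score concatenated into one text synthesis vector.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = ["great plot and acting", "boring and far too long", "such a moving story"]
value_degree = np.array([[0.9], [0.2], [0.8]])  # hypothetical text value scores

# 1. TF-IDF word frequency vectors, linearly reduced with PCA
tfidf = TfidfVectorizer().fit_transform(docs).toarray()
tfidf_reduced = PCA(n_components=2).fit_transform(tfidf)

# 2. doc2vec semantic vectors
tagged = [TaggedDocument(d.split(), [i]) for i, d in enumerate(docs)]
d2v = Doc2Vec(tagged, vector_size=8, min_count=1, epochs=40)
semantic = np.array([d2v.dv[i] for i in range(len(docs))])

# 3. Text synthesis vector: all three feature groups side by side
synthesis = np.hstack([tfidf_reduced, semantic, value_degree])
print(synthesis.shape)  # (3, 11)
```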
Due to the availability of a huge number of electronic text documents from a variety of sources representing unstructured and semi-structured information, document classification has become an interesting area for controlling data behavior. This paper presents a document classification multimodal for categorizing textual semi-structured and unstructured documents. The multimodal implements several individual deep learning models, such as Deep Neural Networks (DNN), Recurrent Convolutional Neural Networks (RCNN), and Bidirectional LSTM (Bi-LSTM). A Stacked Ensemble meta-model technique is used to combine the results of the individual classifiers, producing better results than any of the above-mentioned models reaches individually. A series of textual preprocessing steps is executed to normalize the input corpus, followed by text vectorization: either Term Frequency-Inverse Document Frequency (TFIDF) or Continuous Bag of Words (CBOW) converts the text into a numeric form suitable for the deep learning models. The proposed model is validated on a dataset collected from several spaces with a large number of documents in every class, and the experimental results demonstrate effective performance. For PDF document classification, the proposed model achieved accuracy of up to 0.9045 and 0.959 for the TFIDF and CBOW features, respectively; for JSON documents, up to 0.914 and 0.956, respectively; and for XML documents, up to 0.92 and 0.959, respectively.
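The core idea of the Stacked Ensemble meta-model can be shown compactly: base classifiers are trained on vectorized text, and a meta-model learns from their out-of-fold predictions. In this hedged sketch, small MLPClassifier instances stand in for the paper's DNN/RCNN/Bi-LSTM base learners, and the corpus, labels, and hyperparameters are invented for illustration.

```python
# Sketch of stacking: base learners feed a logistic-regression meta-model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
from sklearn.pipeline import make_pipeline

docs = ["invoice total due", "function parse json", "xml schema element", "payment receipt"]
labels = [0, 1, 1, 0]  # toy document categories

base = [
    ("mlp_a", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)),
    ("mlp_b", MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=500, random_state=1)),
]
# The final_estimator is the meta-model that combines the base predictions.
stack = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(estimators=base, final_estimator=LogisticRegression(), cv=2),
)
stack.fit(docs, labels)
print(stack.predict(["json object field"]))
```

The same wiring applies unchanged if the base learners are replaced with the deep models the paper actually uses; only the estimator list changes.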
Purpose: The ever-increasing penetration of the Internet in our lives has led to an enormous amount of multimedia content generation on the Internet, with textual data contributing a major share of the data generated on the World Wide Web. Understanding people's sentiment is an important aspect of natural language processing, but this opinion can be biased and incorrect if people use sarcasm while commenting, posting status updates, or reviewing a product or a movie. Thus, it is of utmost importance to detect sarcasm correctly and make a correct prediction about people's intentions.
Design/methodology/approach: This study evaluates various machine learning models along with standard and hybrid deep learning models across standardized datasets. We vectorized the text using word embedding techniques to convert the textual data into vectors for analysis, using three standardized datasets available in the public domain and three word embeddings, i.e., Word2Vec, GloVe, and fastText, to validate the hypothesis.
Findings: The key finding is that hybrid models combining Bidirectional Long Short-Term Memory (Bi-LSTM) and Convolutional Neural Network (CNN) layers outperform other conventional machine learning as well as deep learning models across all the datasets considered in this study, validating our hypothesis.
Research limitations: Using data from different sources and customizing the models to each dataset slightly decreases the usability of the technique. Overall, however, the methodology provides effective measures to identify sarcasm, with a minimum average accuracy of 80% or above for one dataset and results better than the current baselines for the other datasets.
Practical implications: The results provide solid insights for system developers integrating this model into real-time analysis of reviews or comments posted in the public domain. The study has further practical implications for businesses that depend on user ratings and public opinions, and it provides a launching platform for researchers working on sarcasm identification in textual data.
Originality/value: This is a first-of-its-kind study comparing conventional and hybrid methods for predicting sarcasm in textual data, and it provides indicators that hybrid models are better suited to analyzing sarcasm in text.
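A minimal Keras sketch of the kind of Bi-LSTM + CNN hybrid the study found strongest is given below. The layer sizes, sequence length, and vocabulary size are assumptions rather than the study's reported hyperparameters, and the embedding layer is where pretrained Word2Vec, GloVe, or fastText vectors would be loaded.

```python
# Illustrative hybrid: embedding -> Bi-LSTM -> 1D CNN -> binary sarcasm output.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, embed_dim, max_len = 20000, 100, 60  # assumed sizes

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embed_dim),  # pretrained vectors could initialize this
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # sarcastic vs. not sarcastic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```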
One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that solve the so-called curse of dimensionality, a problem which plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, typically upwards of hundreds of thousands of terms. As such, much of the research and development in NLP in the last two decades has been devoted to finding and optimizing solutions to this problem, effectively to feature selection in NLP. This paper looks at the development of these techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, namely the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey paper we look at the development of some of the most popular of these techniques from a mathematical as well as a data structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants, typically referred to as word embeddings. In this review of algorithms such as Word2Vec, GloVe, ELMo, and BERT, we explore the idea of semantic spaces more generally, beyond their applicability to NLP.
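The distributional hypothesis the survey rests on can be made concrete with a few lines of gensim: words that occur in similar contexts end up near each other in the learned semantic space. The toy corpus and training parameters below are illustrative assumptions, not material from the survey.

```python
# Training Word2Vec on a toy corpus and querying the semantic space.
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "cat", "sleeps", "on", "the", "mat"],
    ["the", "dog", "sleeps", "on", "the", "rug"],
]
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, sg=1, epochs=200)

# "king"/"queen" and "cat"/"dog" share contexts, so their vectors should
# drift toward each other relative to unrelated words.
print(model.wv.similarity("king", "queen"))
print(model.wv.most_similar("cat", topn=2))
```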
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003).