To discover personalized document structure with consideration of user preferences, the preferences were captured by a limited number of instance-level constraints and given as interested and uninterested key terms. We develop a semi-supervised document clustering approach based on the latent Dirichlet allocation (LDA) model, namely pLDA, guided by the user-provided key terms, and propose a generalized Pólya urn (GPU) model to integrate the user preferences into the document clustering process. A Gibbs sampler was investigated to infer the document collection structure. Experiments on real datasets were conducted to explore the performance of pLDA. The results demonstrate that the pLDA approach is effective.
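The pLDA sampler itself is not given as code here; for context, a minimal collapsed Gibbs sampler for plain LDA (without the GPU-based preference terms) can be sketched as below. The function name, corpus format, and hyperparameter defaults are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def collapsed_gibbs_lda(docs, V, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Plain collapsed Gibbs sampling for LDA.

    docs: list of documents, each a list of word ids in [0, V).
    Returns document-topic and topic-word count matrices.
    """
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), K))        # doc-topic counts
    n_kw = np.zeros((K, V))                # topic-word counts
    n_k = np.zeros(K)                      # per-topic token totals
    z = []                                 # topic assignment per token
    for d, doc in enumerate(docs):         # random initialization
        zd = rng.integers(K, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                # remove current assignment
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # full conditional p(z = k | everything else)
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k                # record new assignment
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return n_dk, n_kw
```

The pLDA approach described above would additionally bias these sampling counts with the GPU model so that user-interested terms attract related words into the same cluster.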
The growth of cloud computing in modern technology is drastic, provisioning services to various industries where data security is a common issue that influences the intrusion detection system (IDS). IDSs are considered an essential factor in fulfilling security requirements. Recently, diverse Machine Learning (ML) approaches have been used for modeling effective IDSs. Most IDSs are based on ML techniques categorized as supervised or unsupervised. However, an IDS with supervised learning relies on labeled data; this is a common drawback, and such systems fail to identify attack patterns. Similarly, unsupervised learning fails to provide satisfactory outcomes. Therefore, this work concentrates on a semi-supervised learning model, a Fuzzy-based semi-supervised approach through Latent Dirichlet Allocation (F-LDA), for intrusion detection in cloud systems, which helps to resolve the aforementioned challenges. Initially, LDA gives better generalization ability for training the labeled data. To handle the unlabeled data, a Fuzzy model is adopted for analyzing the dataset. Preprocessing is carried out to eliminate data redundancy over the network dataset. To validate the efficiency of F-LDA for intrusion detection, the model is tested on the NSL-KDD Cup dataset, a common traffic dataset. Simulation is done in a MATLAB environment and gives better accuracy when compared against the benchmark dataset. The proposed F-LDA gives better accuracy and more promising outcomes than the prevailing approaches.
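The abstract describes the fuzzy model only at a high level. One standard concrete choice for soft-labeling unlabeled samples is fuzzy c-means, sketched below; the function name, parameters, and data layout are assumptions, not the paper's implementation.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: soft cluster memberships for unlabeled points.

    X: (n_samples, n_features) array; c: number of clusters;
    m > 1: fuzzifier. Returns (memberships of shape (c, n), centers).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))            # random soft memberships
    u /= u.sum(axis=0)                     # columns sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        # distances of every point to every center, shape (c, n)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)           # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)          # standard FCM membership update
    return u, centers
```

In a semi-supervised pipeline like the one described, these soft memberships could then be combined with the LDA model trained on the labeled portion.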
Government policy-group integration and policy-chain inference are significant to the execution of strategies in current Chinese society. Specifically, the coordination of hierarchical policies implemented among government departments is one of the key challenges to rural revitalization. In recent years, various well-established quantitative methods have been proposed to evaluate policy coordination, but the majority of these relied on manual analysis, which can lead to subjective results. Thus, in this paper, a novel approach called "policy knowledge graph for the coordination among the government departments" (PG-CODE) is proposed, which incorporates topic modeling into policy knowledge graphs. Similar to a knowledge graph, a policy knowledge graph uses a graph-structured data model to integrate policy discourse. With latent Dirichlet allocation embedding, a policy knowledge graph can capture the underlying topics of the policies. Furthermore, coordination strength and topic diffusion among hierarchical departments can be inferred from the PG-CODE, as it provides a better representation of coordination within the policy space. We implemented and evaluated the PG-CODE in the field of rural innovation and entrepreneurship policy, and the results effectively demonstrate improved coordination among departments.
The Product Sensitive Online Dirichlet Allocation model (PSOLDA) proposed in this paper mainly uses the sentiment polarity of topic words in review text to improve the accuracy of topic evolution. First, we use Latent Dirichlet Allocation (LDA) to obtain the distribution of topic words in the current time window. Second, word2vec word vectors are used as auxiliary information to determine sentiment polarity and obtain the sentiment polarity distribution of the current topic. Finally, the sentiment polarity changes of the topics between the previous and next time windows are mapped to sentiment factors, through which the distribution of topic words in the next time window is controlled. The experimental results show that the PSOLDA model decreases the probability distribution by 0.1601, while Online Twitter LDA increases it by only 0.0699. The topic evolution method proposed in this paper, which integrates the sentiment information of topic words, outperforms the traditional model.
Latent Dirichlet allocation (LDA) is a topic model widely used for discovering hidden semantics in massive text corpora. Collapsed Gibbs sampling (CGS), a widely used algorithm for learning the parameters of LDA, carries a risk of privacy leakage. Specifically, the word count statistics and updates of latent topics in CGS, which are essential for parameter estimation, could be employed by adversaries to conduct effective membership inference attacks (MIAs). To date, two kinds of methods have been exploited in CGS to defend against MIAs: adding noise to word count statistics and utilizing inherent privacy. Both have limitations. Noise sampled from the Laplacian distribution sometimes produces negative word count statistics, which yields terrible parameter estimation in CGS. Utilizing inherent privacy provides only weak guaranteed privacy when defending against MIAs. It is therefore promising to propose an effective framework that obtains accurate parameter estimates with guaranteed differential privacy. The key issue in obtaining accurate parameter estimates when introducing differential privacy into CGS is making good use of the privacy budget so that a precise noise scale is derived. We introduce Rényi differential privacy (RDP) into CGS for the first time and propose RDP-LDA, an effective framework for analyzing the privacy loss of any differentially private CGS. RDP-LDA can derive a tighter upper bound on privacy loss than the overestimated results of existing differentially private CGS obtained by ε-DP. In RDP-LDA, we propose a novel truncated-Gaussian mechanism that keeps word count statistics non-negative, and we propose distribution perturbation, which provides more rigorous guaranteed privacy than utilizing inherent privacy. Experiments validate that our proposed methods produce more accurate parameter estimation under the JS-divergence metric and obtain lower precision and recall when defending against MIAs.
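The truncated-Gaussian mechanism addresses the negative-count failure mode of Laplace noise. A simple illustrative version is sketched below; this is not the paper's calibrated mechanism (there, the noise scale sigma would be derived from the RDP privacy budget), just a demonstration that rejection sampling from a Gaussian conditioned on non-negativity keeps counts valid.

```python
import numpy as np

def truncated_gaussian_counts(counts, sigma, seed=0):
    """Perturb word-count statistics with Gaussian noise truncated to
    [0, inf), so no count ever goes negative. Rejection sampling from
    N(count, sigma^2) conditioned on >= 0 is exactly the truncated
    Gaussian; it is efficient when counts are not far below 0 relative
    to sigma."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    out = rng.normal(counts, sigma)
    while (bad := out < 0).any():          # resample any negative draws
        out[bad] = rng.normal(counts[bad], sigma)
    return out
```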
This study undertakes a thorough analysis of the sentiment within the r/Coronavirus subreddit community regarding COVID-19 vaccines on Reddit. We collected and processed 34,768 comments, spanning from November 20, 2020, to January 17, 2021, using sentiment calculation methods such as TextBlob and Twitter-RoBERTa-Base-sentiment to categorize comments into positive, negative, or neutral sentiments. The methodology involved the use of a count vectorizer as the vectorization technique and the implementation of advanced ensemble algorithms such as XGBoost and Random Forest, achieving an accuracy of approximately 80%. Furthermore, through latent Dirichlet allocation, we identified 23 distinct reasons for vaccine distrust among negative comments. These findings are crucial for understanding the community's attitudes towards vaccination and can guide targeted public health messaging. Our study not only provides insights into public opinion during a critical health crisis, but also demonstrates the effectiveness of combining natural language processing tools and ensemble algorithms in sentiment analysis.
Due to the slow processing speed of text topic clustering in a stand-alone architecture in the context of big data, this paper takes news text as the research object and proposes an LDA text topic clustering algorithm based on the Spark big data platform. Since the TF-IDF (term frequency-inverse document frequency) algorithm under Spark uses an irreversible word mapping, the mapped word indexes cannot be traced back to the original words. This paper proposes an optimized TF-IDF method under Spark that ensures the text words can be restored. First, text features are extracted by the proposed TF-IDF algorithm combined with CountVectorizer, and then the features are input to the LDA (Latent Dirichlet Allocation) topic model for training. Finally, the text topic clustering is obtained. Experimental results show that for large data samples, the processing speed of LDA topic model clustering is improved on Spark. At the same time, compared with the LDA topic model based on word-frequency input, the model proposed in this paper achieves lower perplexity.
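The key point, that an explicit vocabulary makes the feature mapping reversible (unlike hashing-based TF-IDF such as Spark's HashingTF), can be illustrated outside Spark with a small pure-Python sketch. The function name and the exact IDF variant are assumptions for illustration.

```python
import math
from collections import Counter

def tfidf_with_vocab(docs):
    """TF-IDF with an explicit id -> word vocabulary, so any feature
    index can be traced back to its original word (a hashed feature
    index cannot be inverted)."""
    vocab = sorted({w for doc in docs for w in doc})
    index = {w: i for i, w in enumerate(vocab)}
    n_docs = len(docs)
    df = Counter(w for doc in docs for w in set(doc))   # document frequency
    matrix = []
    for doc in docs:
        tf = Counter(doc)
        row = [0.0] * len(vocab)
        for w, count in tf.items():
            idf = math.log(n_docs / df[w]) + 1.0        # one common IDF variant
            row[index[w]] = (count / len(doc)) * idf
        matrix.append(row)
    return matrix, vocab        # vocab[i] restores the original word
```

The returned `matrix` could then feed an LDA trainer, while `vocab` lets each learned topic be reported as readable words rather than opaque indexes.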
As a generative model, the Latent Dirichlet Allocation model focuses on how to generate data and lacks optimization of topics' discrimination capability. This paper aims to improve the discrimination capability through unsupervised feature selection. Theoretical analysis shows that the discrimination capability of a topic is limited by the discrimination capability of its representative words. The discrimination capability of a word is approximated by the information gain of the word for topics, which is used to distinguish between "general words" and "special words" in LDA topics. Therefore, we add a constraint to the LDA objective function so that "general words" occur only in "general topics" rather than in "special topics". A heuristic algorithm is then presented to obtain the solution. Experiments show that this method not only improves the information gain of topics, but also makes the topics easier for humans to understand.
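The general-word versus special-word distinction can be illustrated with a simple entropy measure over a word's topic distribution; this is a proxy for the information-gain criterion described above, and the paper's exact formula may differ.

```python
import math

def topic_entropy(word_topic_counts):
    """Entropy of a word's distribution over topics. High entropy means
    the word is spread evenly across topics (a 'general word'); low
    entropy means it concentrates in few topics (a 'special word'),
    making it more discriminative."""
    total = sum(word_topic_counts)
    probs = [c / total for c in word_topic_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```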
Phishing is the act of attempting to steal a user's financial and personal information, such as credit card numbers and passwords, by pretending to be a trustworthy participant during online communication. Attackers may direct users to a fake website that seems legitimate, and then gather useful and confidential information using that site. To protect users from social engineering techniques such as phishing, various measures have been developed, including improvements to technical security. In this paper, we propose a new technique, namely "A Prediction Model for the Detection of Phishing e-mails using Topic Modelling, Named Entity Recognition and Image Processing". The features extracted are topic modelling features, named entity features, and structural features. A multi-classifier prediction model is used to detect phishing mails. Experimental results show that the multi-classification technique outperforms single-classifier-based prediction techniques. The resultant accuracy of phishing e-mail detection is 99%, with the highest false positive rate being 2.1%.
With the progress and development of computer technology, applying machine learning methods to cancer research has become an important research field. To analyze the most recent research status and trends, main research topics, topic evolutions, research collaborations, and potential directions of this field, this study conducts a bibliometric analysis of 6206 research articles worldwide, collected from PubMed between 2011 and 2021, concerning cancer research using machine learning methods. Python is used as the tool for bibliometric analysis, Gephi is used for social network analysis, and the Latent Dirichlet Allocation model is used for topic modeling. The trend analysis of articles not only reflects the innovative research at the intersection of machine learning and cancer but also demonstrates its vigorous development and increasing impact. In terms of journals, Nature Communications is the most influential journal and Scientific Reports is the most prolific one. The United States and Harvard University have contributed the most to cancer research using machine learning methods. As for research topics, "Support Vector Machine," "classification," and "deep learning" have been the core focuses of the field. These findings help scholars and related practitioners to better understand the development status and trends of cancer research using machine learning methods, as well as to gain a deeper understanding of research hotspots.
Twitter can supply useful information on infrastructure impacts to emergency managers during major disasters, but it is time consuming to filter through many irrelevant tweets. Previous studies have identified the types of messages that can be found on social media during disasters, but few solutions have been proposed to efficiently extract useful ones. We present a framework that can be applied in a timely manner to provide disaster impact information sourced from social media. The framework is tested on a well-studied and data-rich case, Hurricane Harvey. The procedure consists of filtering the raw Twitter data based on keywords, location, and tweet attributes, and then applying latent Dirichlet allocation (LDA) to separate the tweets from the disaster-affected area into categories (topics) useful to emergency managers. The LDA revealed that of the 24 topics found in the data, nine were directly related to disaster impacts, for example, outages, closures, flooded roads, and damaged infrastructure. Features such as frequent hashtags, mentions, URLs, and useful images were then extracted and analyzed. The relevant tweets, along with useful images, were correlated at the county level with flood depth, distributed disaster aid (damage), and population density. Significant correlations were found between the nine relevant topics and population density, but not flood depth or damage, suggesting that more research into the suitability of social media data for disaster impact modeling is needed. The results from this study provide baseline information for such efforts in the future.
Supervised topic modeling algorithms have been successfully applied to multi-label document classification tasks. Representative models include labeled latent Dirichlet allocation (L-LDA) and dependency-LDA. However, these models neglect the class frequency information of words (i.e., the number of classes in which a word has occurred in the training data), which is significant for classification. To address this, we propose a method, namely the class frequency weight (CF-weight), to weight words by considering class frequency knowledge. The CF-weight is based on the intuition that a word with higher (lower) class frequency will be less (more) discriminative. In this study, the CF-weight is used to improve L-LDA and dependency-LDA. A number of experiments have been conducted on real-world multi-label datasets. Experimental results demonstrate that CF-weight-based algorithms are competitive with existing supervised topic models.
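The stated intuition, that a word occurring in many classes is less discriminative, has an IDF-like shape. The sketch below is one plausible instantiation for illustration; the paper's exact CF-weight formula may differ.

```python
import math

def cf_weight(class_freq, num_classes):
    """IDF-style class-frequency weight: a word appearing in many
    classes (high class_freq) gets a low weight, while a word confined
    to few classes gets a high weight. Add-one smoothing keeps the
    weight finite and positive. An assumed form, not the paper's."""
    return math.log((num_classes + 1) / (class_freq + 1)) + 1.0
```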
Topic recognition with a dynamic number of topics can realize dynamic updating of hyperparameters and obtain the probability distribution of dynamic topics along the time dimension, which helps in understanding and tracking streaming text data. However, current topic recognition models tend to be based on a fixed number of topics K and lack multi-granularity analysis of subject knowledge, so they cannot deeply perceive dynamic changes of topics in the time series. By introducing a novel approach on the basis of the infinite latent Dirichlet allocation model, a topic feature lattice under a dynamic topic number is constructed. In the model, documents, topics, and vocabularies are jointly modeled to generate two probability distribution matrices: documents-topics and topics-feature words. Afterwards, the association intensity between each topic and its feature vocabulary is computed to establish the topic formal context matrix. Finally, the topic features are induced according to formal concept analysis (FCA) theory. The topic feature lattice under dynamic topic number (TFL-DTN) model is validated on a real dataset by comparison with mainstream methods. Experiments show that this model is more in line with actual needs and achieves better results in semi-automatic modeling for topic visualization analysis.
Funding (pLDA): National Natural Science Foundation of China (Nos. 61262006, 61462011, 61202089); the Major Applied Basic Research Program of Guizhou Province Project, China (No. JZ20142001); the Science and Technology Foundation of Guizhou Province Project, China (No. LH20147636); the National Research Foundation for the Doctoral Program of Higher Education of China (No. 20125201120006); the Graduate Innovated Foundations of Guizhou University Project, China (No. 2015012).
Funding (PG-CODE): supported by the National Social Science Fund of China (No. 20BGL231) and the Natural Science Foundation of Hubei Province (No. 2018CFB380).
Funding (PSOLDA): supported by the Opening Project of Shanghai Key Laboratory of Integrated Administration Technologies for Information Security (AGK2019004), the Songjiang District Science and Technology Research Project (19SJKJGG83), and the National Natural Science Foundation of China (61802251).
Funding (RDP-LDA): the National Natural Science Foundation of China under Grant Nos. 62072460, 62076245, and 62172424, and the Beijing Natural Science Foundation under Grant No. 4212022.
Funding (Spark-based LDA clustering): this work is supported by the Science Research Projects of the Hunan Provincial Education Department (Nos. 18A174, 18C0262), the National Natural Science Foundation of China (No. 61772561), the Key Research & Development Plan of Hunan Province (Nos. 2018NK2012, 2019SK2022), the Degree & Postgraduate Education Reform Project of Hunan Province (No. 209), and the Postgraduate Education and Teaching Reform Project of Central South Forestry University (No. 2019JG013).
Funding (LDA discrimination capability): supported by the National Nature Science Foundation of China under Grant Nos. 60905017 and 61072061, the National High Technical Research and Development Program of China (863 Program) under Grant No. 2009AA01A346, the 111 Project of China under Grant No. B08004, and the Special Project for Innovative Young Researchers of Beijing University of Posts and Telecommunications.
Funding (cancer ML bibliometrics): Natural Science Foundation of Guangdong Province, Grant/Award Number 2021A1515011339.
Funding: This article is based on work supported by two grants from the National Science Foundation of the United States (Grant Numbers 1620451 and 1945787). Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Abstract: Twitter can supply useful information on infrastructure impacts to emergency managers during major disasters, but filtering through the many irrelevant tweets is time consuming. Previous studies have identified the types of messages that can be found on social media during disasters, but few solutions have been proposed to extract useful ones efficiently. We present a framework that can be applied in a timely manner to provide disaster impact information sourced from social media. The framework is tested on a well-studied and data-rich case, Hurricane Harvey. The procedure consists of filtering the raw Twitter data by keywords, location, and tweet attributes, and then applying latent Dirichlet allocation (LDA) to separate the tweets from the disaster-affected area into categories (topics) useful to emergency managers. The LDA revealed that, of 24 topics found in the data, nine were directly related to disaster impacts, for example, outages, closures, flooded roads, and damaged infrastructure. Features such as frequent hashtags, mentions, URLs, and useful images were then extracted and analyzed. The relevant tweets, along with useful images, were correlated at the county level with flood depth, distributed disaster aid (damage), and population density. Significant correlations were found between the nine relevant topics and population density, but not flood depth or damage, suggesting that more research into the suitability of social media data for disaster impact modeling is needed. The results from this study provide baseline information for such efforts in the future.
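The keyword-and-location filtering stage described above can be sketched as a simple predicate over each tweet. This is a minimal illustration: the field names (`text`, `coords`) and the bounding-box representation are assumptions, not the Twitter API schema or the study's exact filter.

```python
def filter_tweets(tweets, keywords, bbox):
    """Keep tweets that mention at least one disaster keyword and
    whose coordinates fall inside the affected area's bounding box
    (lon_min, lat_min, lon_max, lat_max)."""
    lon0, lat0, lon1, lat1 = bbox
    kept = []
    for t in tweets:
        text = t["text"].lower()
        if not any(k in text for k in keywords):
            continue  # no disaster-related keyword
        lon, lat = t.get("coords", (None, None))
        if lon is None or not (lon0 <= lon <= lon1 and lat0 <= lat <= lat1):
            continue  # missing or out-of-area location
        kept.append(t)
    return kept

# Illustrative data: a Houston-area bounding box and sample tweets
tweets = [
    {"text": "Flooded road near downtown", "coords": (-95.4, 29.7)},
    {"text": "Great concert tonight!", "coords": (-95.4, 29.7)},
    {"text": "flooding everywhere", "coords": (0.0, 0.0)},
]
relevant = filter_tweets(tweets, ["flood", "outage"], (-96.0, 29.0, -95.0, 30.0))
```

Only tweets passing both tests reach the LDA stage, which keeps the topic model focused on the affected area.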
Funding: Project supported by the National Natural Science Foundation of China (No. 61602204).
Abstract: Supervised topic modeling algorithms have been successfully applied to multi-label document classification tasks. Representative models include labeled latent Dirichlet allocation (L-LDA) and dependency-LDA. However, these models neglect the class frequency information of words (i.e., the number of classes in which a word occurs in the training data), which is significant for classification. To address this, we propose a method, namely the class frequency weight (CF-weight), to weight words by considering class frequency knowledge. The CF-weight is based on the intuition that a word with a higher (lower) class frequency will be less (more) discriminative. In this study, the CF-weight is used to improve L-LDA and dependency-LDA. A number of experiments have been conducted on real-world multi-label datasets. Experimental results demonstrate that CF-weight-based algorithms are competitive with existing supervised topic models.
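The class-frequency intuition above can be made concrete with an IDF-style weighting over classes: a word seen in many classes gets a low weight, a word confined to few classes a high one. The abstract does not give the paper's exact formula, so the smoothed logarithmic form below is an assumption for illustration.

```python
import math
from collections import defaultdict

def class_frequency_weights(labeled_docs, smooth=1.0):
    """Compute an inverse-class-frequency weight for each word.
    labeled_docs: iterable of (word_list, label_list) pairs.
    Words occurring in many classes receive lower weights."""
    word_classes = defaultdict(set)   # word -> set of classes it occurs in
    all_classes = set()
    for words, labels in labeled_docs:
        for c in labels:
            all_classes.add(c)
            for w in words:
                word_classes[w].add(c)
    C = len(all_classes)
    # IDF-style score: log((C + s) / (cf(w) + s))
    return {w: math.log((C + smooth) / (len(cs) + smooth))
            for w, cs in word_classes.items()}

docs = [(["common", "rare_a"], ["sports"]),
        (["common", "rare_b"], ["politics"])]
weights = class_frequency_weights(docs)
```

Such weights can then scale a word's contribution when estimating the label-topic distributions in L-LDA or dependency-LDA.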
Funding: the Key Projects of Social Sciences of Anhui Provincial Department of Education (SK2018A1064, SK2018A1072), the Natural Scientific Project of Anhui Provincial Department of Education (KJ2019A0371), and the Innovation Team of Health Information Management and Application Research (BYKC201913), BBMC.
Abstract: Topic recognition with a dynamic number of topics enables dynamic updating of the hyperparameters and yields the probability distribution of dynamic topics along the time dimension, which helps with understanding and tracking streaming text data. However, current topic recognition models tend to be based on a fixed number of topics K and lack multi-granularity analysis of topic knowledge, so they cannot deeply perceive the dynamic change of topics in the time series. By introducing a novel approach based on the infinite latent Dirichlet allocation model, a topic feature lattice under a dynamic topic number is constructed. In the model, documents, topics, and vocabularies are jointly modeled to generate two probability distribution matrices: documents-topics and topics-feature words. Afterwards, the association intensity between each topic and its feature vocabulary is computed to establish the topic formal context matrix. Finally, the topic feature lattice is induced according to formal concept analysis (FCA) theory. The topic feature lattice under dynamic topic number (TFL-DTN) model is validated on real datasets against mainstream methods. Experiments show that this model is more in line with actual needs and achieves better results in semi-automatic modeling for topic visualization analysis.
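The step from association intensities to a formal context can be sketched as a simple thresholding: each topic becomes an object of the context, and a feature word becomes one of its attributes whenever its association intensity clears a cutoff. The intensity measure and threshold in the paper are not specified in the abstract, so both are assumed here; the result is the binary incidence relation that FCA lattice construction takes as input.

```python
def topic_formal_context(topic_word_intensity, threshold):
    """Binarize a topic-by-word association-intensity table into a
    formal context: topic -> set of feature words whose intensity
    is at or above the threshold. Intensities are assumed to be
    precomputed from the topics-feature words distribution."""
    return {topic: {w for w, score in word_scores.items() if score >= threshold}
            for topic, word_scores in topic_word_intensity.items()}

# Illustrative intensities for two topics
intensities = {
    "topic_0": {"flood": 0.9, "music": 0.1, "road": 0.6},
    "topic_1": {"music": 0.8, "concert": 0.7},
}
context = topic_formal_context(intensities, threshold=0.5)
```

Each (topic, attribute-set) pair in `context` is a row of the formal context matrix from which the concept lattice is then induced.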