The sharp increase in the amount of Internet Chinese text data has significantly prolonged the processing time of classification on these data. In order to solve this problem, this paper proposes and implements a parallel naive Bayes algorithm (PNBA) for Chinese text classification based on Spark, a parallel in-memory computing platform for big data. This algorithm implements parallel operation throughout the entire training and prediction process of the naive Bayes classifier, mainly by adopting the programming model of resilient distributed datasets (RDD). For comparison, a PNBA based on Hadoop is also implemented. The test results show that in the same computing environment and for the same text sets, the Spark PNBA is clearly superior to the Hadoop PNBA in terms of key indicators such as speedup ratio and scalability. Therefore, Spark-based parallel algorithms can better meet the requirements of large-scale Chinese text data mining.
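As a rough illustration of the RDD-based idea described above (a minimal sketch, not the paper's PNBA implementation), the snippet below computes the class priors and Laplace-smoothed per-class token counts of a multinomial naive Bayes model with parallel RDD transformations; the toy documents, labels, and variable names are assumptions.

```python
from collections import defaultdict
import math

from pyspark import SparkContext

sc = SparkContext(appName="nb-rdd-sketch")

# Assumed input: an RDD of (label, [token, ...]) pairs, e.g. pre-segmented Chinese text.
docs = sc.parallelize([
    ("sports", ["football", "match", "goal"]),
    ("finance", ["stock", "market", "goal"]),
    ("sports", ["match", "score"]),
])

n_docs = docs.count()

# Class priors computed in parallel: count documents per label.
priors = {c: n / n_docs for c, n in docs.countByKey().items()}

# Per-class token counts via a distributed reduceByKey.
token_counts = (docs
                .flatMap(lambda lt: [((lt[0], tok), 1) for tok in lt[1]])
                .reduceByKey(lambda a, b: a + b)
                .collect())

vocab = {tok for (_, tok), _ in token_counts}
class_totals = defaultdict(int)
counts = defaultdict(dict)
for (c, tok), n in token_counts:
    counts[c][tok] = n
    class_totals[c] += n

def log_posterior(tokens, c):
    """Laplace-smoothed multinomial NB log-score for class c."""
    score = math.log(priors[c])
    for tok in tokens:
        score += math.log((counts[c].get(tok, 0) + 1) /
                          (class_totals[c] + len(vocab)))
    return score

print(max(priors, key=lambda c: log_posterior(["match", "goal"], c)))
```

The heavy lifting (token counting) happens inside `flatMap`/`reduceByKey`, which is what Spark distributes across the cluster; prediction can be parallelized the same way by mapping `log_posterior` over an RDD of test documents.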
A method is proposed to resolve the typical problem of air combat situation assessment. Taking one-to-one air combat as an example, and on the basis of air combat data recorded by the air combat maneuvering instrument, the problem of air combat situation assessment is recast as the situation classification problem of air combat data. The fuzzy C-means clustering algorithm is used to cluster the selected air combat sample data, and the situation classification of the data is determined by data correlation analysis in combination with the clustering results and the pilots' description of the air combat process. On the basis of the semi-supervised naive Bayes classifier, an improved algorithm based on data classification confidence is proposed, through which the situation classification of air combat data is carried out. The simulation results show that the improved algorithm can assess the air combat situation effectively, and that the improvement promotes classification performance without significantly affecting the efficiency of the classifier.
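The confidence-based semi-supervised step can be pictured as a self-training loop in which only unlabeled samples whose naive Bayes posterior exceeds a threshold are absorbed into the training set. The sketch below shows that generic loop; it is not the paper's improved algorithm, and the threshold, feature layout, and data are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def confidence_self_training(X_lab, y_lab, X_unlab, conf_threshold=0.95, max_rounds=10):
    """Iteratively absorb unlabeled samples whose predicted class confidence is high."""
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    model = GaussianNB()
    for _ in range(max_rounds):
        model.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= conf_threshold
        if not confident.any():
            break
        # Move confident samples (with their pseudo-labels) into the labeled pool.
        pseudo = model.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
    model.fit(X_lab, y_lab)
    return model

# Toy usage with assumed air-combat-style feature vectors (e.g. range, closure rate, aspect angle).
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 3)); y_lab = rng.integers(0, 2, size=20)
X_unlab = rng.normal(size=(100, 3))
clf = confidence_self_training(X_lab, y_lab, X_unlab)
```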
With increasing intelligence and integration, a great number of two-valued variables (generally stored as 0 or 1) often exist in large-scale industrial processes. However, these variables cannot be effectively handled by traditional monitoring methods such as linear discriminant analysis (LDA), principal component analysis (PCA) and partial least squares (PLS) analysis. Recently, a mixed hidden naive Bayesian model (MHNBM) was developed for the first time to utilize both two-valued and continuous variables for abnormality monitoring. Although the MHNBM is effective, it still has some shortcomings that need to be improved. In the MHNBM, variables with greater correlation to other variables have greater weights, which cannot guarantee that greater weights are assigned to the more discriminating variables. In addition, the conditional probability P(x_j | x_j', y = k) must be computed based on historical data. When the training data are scarce, the conditional probability between continuous variables tends to be uniformly distributed, which affects the performance of the MHNBM. Here, a novel feature-weighted mixed naive Bayes model (FWMNBM) is developed to overcome the above shortcomings. In the FWMNBM, the variables that are more correlated to the class have greater weights, which makes the more discriminating variables contribute more to the model. At the same time, the FWMNBM does not have to calculate the conditional probability between variables, so it is less restricted by the number of training data samples. Compared with the MHNBM, the FWMNBM has better performance, and its effectiveness is validated through numerical cases of a simulation example and a practical case of the Zhoushan thermal power plant (ZTPP), China.
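To make the class-relevance weighting idea concrete, the sketch below assigns each continuous variable a weight from its mutual information with the class label and scales the per-feature Gaussian log-likelihoods by those weights. This is only an illustrative weighted-NB scheme under assumed details, not the FWMNBM itself.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

class WeightedGaussianNB:
    """Gaussian NB whose per-feature log-likelihoods are scaled by class-relevance weights."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # Weight each feature by its mutual information with the class (normalized to sum to d).
        mi = mutual_info_classif(X, y, random_state=0)
        self.w_ = mi / mi.sum() * X.shape[1] if mi.sum() > 0 else np.ones(X.shape[1])
        self.theta_, self.var_, self.prior_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.theta_[c] = Xc.mean(axis=0)
            self.var_[c] = Xc.var(axis=0) + 1e-9
            self.prior_[c] = len(Xc) / len(X)
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            ll = -0.5 * (np.log(2 * np.pi * self.var_[c])
                         + (X - self.theta_[c]) ** 2 / self.var_[c])
            scores.append(np.log(self.prior_[c]) + (self.w_ * ll).sum(axis=1))
        return self.classes_[np.argmax(np.column_stack(scores), axis=1)]

# Toy usage with assumed process variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
print(WeightedGaussianNB().fit(X, y).predict(X[:5]))
```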
The naive Bayes (NB) model has been successfully used to tackle spam, and is very accurate. However, there is still room for improvement. We use a train-on-or-near-error (TONE) method in online NB to enhance the performance of NB and reduce the number of training emails. We conducted an experiment to determine the performance of the improved algorithm by plotting (1-ROCA)% curves. The results show that the proposed method improves the performance of the original NB.
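A minimal sketch of the train-on-or-near-error idea, assuming a simple count-based multinomial NB over tokenized emails: the model is updated only when a message is misclassified or its score falls inside an uncertainty margin, so far fewer training emails are actually absorbed. The margin value and token handling are assumptions, not the paper's exact setup.

```python
import math
from collections import defaultdict

class OnlineNB:
    """Tiny online multinomial NB with a train-on-or-near-error (TONE) update rule."""
    def __init__(self, margin=1.0):
        self.margin = margin                    # "near error" band around the decision boundary
        self.counts = {0: defaultdict(int), 1: defaultdict(int)}
        self.totals = {0: 0, 1: 0}
        self.docs = {0: 1, 1: 1}                # start with a symmetric prior
        self.vocab = set()

    def score(self, tokens):
        # Log-odds of spam (1) versus ham (0) with Laplace smoothing.
        s = math.log(self.docs[1] / self.docs[0])
        for t in tokens:
            s += math.log((self.counts[1][t] + 1) / (self.totals[1] + len(self.vocab) + 1))
            s -= math.log((self.counts[0][t] + 1) / (self.totals[0] + len(self.vocab) + 1))
        return s

    def _train(self, tokens, label):
        self.docs[label] += 1
        for t in tokens:
            self.counts[label][t] += 1
            self.totals[label] += 1
            self.vocab.add(t)

    def observe(self, tokens, label):
        s = self.score(tokens)
        predicted = 1 if s > 0 else 0
        # TONE: update only on errors or when the score is within the margin.
        if predicted != label or abs(s) < self.margin:
            self._train(tokens, label)
        return predicted

nb = OnlineNB()
stream = [(["win", "cash", "now"], 1), (["meeting", "tomorrow"], 0), (["win", "prize"], 1)]
for tokens, label in stream:
    nb.observe(tokens, label)
```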
In recent years, with the increasing popularity of social networks, rumors have become more common. At present, the response to rumors in social networks relies mainly on media censorship and manual reporting, but this approach requires a lot of manpower and material resources, and the cost is relatively high. Therefore, research on the characteristics of rumors and on the automatic identification and classification of network message text is of great significance. This paper uses the Naive Bayes algorithm combined with Laplacian smoothing to identify rumors in social network texts. The first step is to segment the text and remove the stop words after word segmentation is completed. Because of the data-sensitive nature of Naive Bayes, this paper performs text preprocessing on the input data. Then a naive Bayes classifier is constructed, and the Laplacian smoothing method is introduced to solve the zero-probability problem that arises when the naive Bayes model estimates probabilities in rumor recognition. Finally, experiments show that the Naive Bayes algorithm combined with Laplace smoothing can effectively improve the accuracy of rumor recognition.
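The zero-probability fix can be stated directly: with Laplace (add-one) smoothing, a word that never appears with a class still gets a small nonzero likelihood. A minimal sketch of the smoothed estimate follows, with made-up rumor-class counts used purely for illustration.

```python
# Laplace (add-one) smoothing for the word likelihood P(word | class):
#   P(w | c) = (count(w, c) + 1) / (total_words_in_c + |V|)
# so an unseen word contributes 1 / (total_words_in_c + |V|) instead of zero,
# which keeps the product of likelihoods from collapsing to 0.

def smoothed_likelihood(word, word_counts, class_total, vocab_size):
    return (word_counts.get(word, 0) + 1) / (class_total + vocab_size)

# Toy counts for the "rumor" class (assumed numbers, for illustration only).
rumor_counts = {"shocking": 30, "forward": 25, "official": 2}
vocab_size = 5000
class_total = sum(rumor_counts.values())

print(smoothed_likelihood("shocking", rumor_counts, class_total, vocab_size))  # seen word
print(smoothed_likelihood("verified", rumor_counts, class_total, vocab_size))  # unseen word, still > 0
```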
Spam is a universal problem with which everyone is familiar. A number of approaches are used for spam filtering. The most common technique is content-based filtering, which uses the actual text of a message to determine whether it is spam or not. The content is very dynamic, and it is very challenging to represent all of its information in a mathematical model of classification. For instance, in content-based spam filtering, the characteristics used by the filter to identify spam messages are constantly changing over time. The Naïve Bayes method represents the changing nature of a message using probability theory, and the support vector machine (SVM) represents it using different features. These two classification methods are efficient in different domains, but the case of Nepali SMS or text classification has not yet been considered; it is therefore interesting to find out how both methods perform on the problem of Nepali text classification. In this paper, Naïve Bayes and SVM-based classification techniques are implemented to classify Nepali SMS as spam and non-spam. An empirical analysis over various text cases has been done to evaluate the accuracy of the classification methodologies used in this study; the accuracy is found to be 87.15% for SVM and 92.74% for Naïve Bayes.
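A hedged sketch of how such a Naïve Bayes versus SVM comparison is commonly set up with scikit-learn bag-of-words pipelines; the placeholder messages and labels stand in for the Nepali SMS corpus, which is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Placeholder data standing in for the Nepali SMS corpus (label 1 = spam, 0 = non-spam).
texts = ["free recharge offer", "meeting at 5 pm", "win cash prize now", "see you tomorrow"] * 25
labels = [1, 0, 1, 0] * 25

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.3, random_state=42)

models = {
    "Naive Bayes": Pipeline([("vec", TfidfVectorizer()), ("clf", MultinomialNB())]),
    "SVM":         Pipeline([("vec", TfidfVectorizer()), ("clf", LinearSVC())]),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```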
The task of classifying opinions conveyed in any form of online text is referred to as sentiment analysis. The emergence and spread of social media usage has given room for sentiment analysis in our daily lives. Social media applications and websites have become the foremost source of data used for sentiment reviews in various fields. Various subject matter can be encountered on social media platforms, such as movie and product reviews, consumer opinions, and testimonies, among others, which can be used for sentiment analysis. The rapid uncovering of these web contents offers many benefits, profit-making being one of the most vital of them. According to a recent study, 81% of consumers conduct online research prior to making a purchase. But the reviews available online are too huge and numerous for human brains to process and analyze. Hence, machine learning classifiers are among the prominent tools used to classify sentiment in order to get valuable information for use in companies like hotels, game companies, and so on. Understanding the sentiments of people towards different commodities helps to improve services for contextual promotions, referral systems, and market research. Therefore, this study proposes a sentiment-based detection framework to enable the rapid uncovering of opinionated content in hotel reviews. A Naive Bayes classifier was used to process and analyze the dataset for the detection of the polarity of the words. The dataset from Datafiniti's Business Database, obtained from Kaggle, was used for the experiments in this study. The performance evaluation of the model shows a test accuracy of 96.08%, an F1-score of 96.00%, a precision of 96.00%, and a recall of 96.00%. The results were compared with state-of-the-art classifiers and showed promising performance that was much better in terms of performance metrics.
Executing customer analysis in a systematic way is one of the possible solutions for an enterprise to understand the behavior of consumer patterns in an efficient and in-depth manner. Further investigation of customer patterns helps the firm to develop efficient decisions and, in turn, helps to optimize the enterprise's business and maximize consumer satisfaction correspondingly. To conduct an effective assessment of the customers, Naive Bayes (also called Simple Bayes), a machine learning model, is utilized. However, the efficacy of the simple Bayes model relies entirely on the consumer data used, and the existence of uncertain and redundant attributes in the consumer data leads the simple Bayes model to poor predictions because of its assumption regarding the attributes applied. In practice, the NB premise does not hold in consumer data, and the analysis of these redundant attributes causes the simple Bayes model to obtain poor prediction results. In this work, an ensemble attribute selection methodology is performed to overcome this problem with consumer data and to pick a stable, uncorrelated attribute set to model with the NB classifier. In ensemble variable selection, two different strategies are applied: one based upon data perturbation (a homogeneous ensemble, where the same feature selector is applied to different subsamples derived from the same learning set) and the other based upon function perturbation (a heterogeneous ensemble, where different feature selectors are applied to the same learning set). Furthermore, the feature set captured from each ensemble strategy is applied to NB individually and the outcomes obtained are computed. Finally, the experimental outcomes show that the proposed ensemble strategies perform efficiently in choosing a stable attribute set and in increasing NB classification performance.
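The two ensemble strategies can be sketched as follows: a homogeneous ensemble reapplies one selector to bootstrap subsamples, while a heterogeneous ensemble applies different selectors to the same data; the most consistently chosen features are then passed to NB. The selectors, vote aggregation, and synthetic data here are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, f_classif, mutual_info_classif
from sklearn.naive_bayes import GaussianNB

def homogeneous_votes(X, y, k=5, n_rounds=10, seed=0):
    """Data perturbation: the same selector (f_classif) applied to bootstrap subsamples."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        idx = rng.choice(len(X), size=int(0.7 * len(X)), replace=True)
        votes += SelectKBest(f_classif, k=k).fit(X[idx], y[idx]).get_support()
    return votes

def heterogeneous_votes(X, y, k=5):
    """Function perturbation: different selectors applied to the same learning set."""
    votes = np.zeros(X.shape[1])
    for score_fn in (chi2, f_classif, mutual_info_classif):
        votes += SelectKBest(score_fn, k=k).fit(X, y).get_support()
    return votes

# Assumed non-negative consumer features (chi2 requires non-negative values).
rng = np.random.default_rng(1)
X = rng.random((200, 12)); y = rng.integers(0, 2, size=200)

for name, votes in [("homogeneous", homogeneous_votes(X, y)),
                    ("heterogeneous", heterogeneous_votes(X, y))]:
    feats = np.argsort(votes)[-5:]          # keep the 5 most consistently chosen features
    acc = GaussianNB().fit(X[:, feats], y).score(X[:, feats], y)
    print(name, sorted(feats.tolist()), round(acc, 3))
```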
Classification can be regarded as dividing the data space into decision regions separated by decision boundaries. In this paper we analyze decision tree algorithms and the NBTree algorithm from this perspective. Thus, a decision tree can be regarded as a classifier tree, in which each classifier on a non-root node is trained in the decision regions of the classifier on its parent node. Meanwhile, the NBTree algorithm, which generates a classifier tree with the C4.5 algorithm and the naive Bayes classifier as the root and leaf classifiers respectively, can also be regarded as training naive Bayes classifiers in the decision regions of the C4.5 algorithm. We propose a second division (SD) algorithm and three soft second division (SD-soft) algorithms to train classifiers in the decision regions of the naive Bayes classifier. These four novel algorithms all generate two-level classifier trees with the naive Bayes classifier as the root classifier. The SD and the three SD-soft algorithms can make good use of both the information contained in instances near decision boundaries and the information that may be ignored by the naive Bayes classifier. Finally, we conduct experiments on 30 data sets from the UC Irvine (UCI) repository. The results show that the SD algorithm can obtain better generalization abilities than the NBTree and the averaged one-dependence estimators (AODE) algorithms when using the C4.5 algorithm and support vector machine (SVM) as leaf classifiers. Further experiments indicate that our three SD-soft algorithms can achieve better generalization abilities than the SD algorithm when argument values are selected appropriately.
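A simplified sketch of the "train classifiers in the decision regions of a root classifier" idea: a naive Bayes root partitions the training data by its own predicted class, and a separate leaf SVM is fitted inside each region. This is a generic two-level scheme under assumed details, not the SD or SD-soft algorithms.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

class TwoLevelClassifierTree:
    """Root NB defines decision regions; one leaf classifier is trained per region."""
    def __init__(self, leaf_factory=lambda: SVC()):
        self.root = GaussianNB()
        self.leaf_factory = leaf_factory
        self.leaves = {}

    def fit(self, X, y):
        self.root.fit(X, y)
        regions = self.root.predict(X)          # decision region = root's predicted class
        for r in np.unique(regions):
            Xr, yr = X[regions == r], y[regions == r]
            if len(np.unique(yr)) > 1:
                self.leaves[r] = self.leaf_factory().fit(Xr, yr)
            # if a region is pure, fall back to the root's prediction there
        return self

    def predict(self, X):
        regions = self.root.predict(X)
        out = regions.copy()
        for r, leaf in self.leaves.items():
            mask = regions == r
            if mask.any():
                out[mask] = leaf.predict(X[mask])
        return out

from sklearn.datasets import make_moons
X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
print(TwoLevelClassifierTree().fit(X, y).predict(X[:5]))
```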
The value difference metric (VDM) is one of the best-known and most widely used distance functions for nominal attributes. This work applies the instance weighting technique to improve VDM, and an instance-weighted value difference metric (IWVDM) is proposed here. Different from prior work, IWVDM uses naive Bayes (NB) to find weights for training instances. Because early work has shown that there is a close relationship between VDM and NB, some work on NB can be applied to VDM. The weight of a training instance x that belongs to the class c is assigned according to the difference between the conditional probability P(c|x) estimated by NB and the true conditional probability P(c|x), and the weight is adjusted iteratively. Compared with previous work, IWVDM has the advantage of reducing the time complexity of the process of finding weights while simultaneously improving the performance of VDM. Experimental results on 36 UCI datasets validate the effectiveness of IWVDM.
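For reference, the value difference metric itself can be written down compactly: the distance between two nominal values is the summed difference of the class-conditional probabilities they induce, and instance weighting simply turns the underlying counts into weighted counts. The sketch below implements plain (optionally instance-weighted) VDM; the iterative NB-driven weight update of IWVDM is not reproduced.

```python
from collections import defaultdict

import numpy as np

def vdm_tables(X, y, weights=None):
    """Per-attribute tables of P(c | attribute value), from (optionally weighted) counts."""
    if weights is None:
        weights = np.ones(len(y))
    classes = np.unique(y)
    tables = []
    for a in range(X.shape[1]):
        counts = defaultdict(lambda: defaultdict(float))
        for v, c, w in zip(X[:, a], y, weights):
            counts[v][c] += w
        table = {}
        for v, cc in counts.items():
            total = sum(cc.values())
            table[v] = np.array([cc.get(c, 0.0) / total for c in classes])
        tables.append(table)
    return tables

def vdm_distance(x1, x2, tables, q=2):
    """VDM(x1, x2) = sum over attributes a and classes c of |P(c|x1_a) - P(c|x2_a)|**q."""
    return sum(np.sum(np.abs(t[v1] - t[v2]) ** q)
               for t, v1, v2 in zip(tables, x1, x2))

# Toy nominal data: columns are e.g. colour and shape codes.
X = np.array([["red", "round"], ["red", "square"], ["blue", "round"], ["blue", "square"]])
y = np.array([1, 1, 0, 0])
tables = vdm_tables(X, y)                        # unweighted VDM; pass weights=... for IWVDM-style counts
print(vdm_distance(X[0], X[2], tables))
```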
Debris flow triggered by rainfall that accompanies a volcanic eruption is a serious secondary impact of a volcanic disaster. The probability of debris flow events can be estimated based on the prior information of rainfall from historical and geomorphological data that are presumed to relate to debris flow occurrence. In this study, a debris flow disaster warning system was developed by applying the Naïve Bayes Classifier (NBC). The spatial likelihood of the hazard is evaluated at a small subbasin scale by including high-resolution rainfall measurements from X-band polarimetric weather radar, a topographic factor, and soil type as predictors. The study was conducted in the Gendol River Basin of Mount Merapi, one of the most active volcanoes in Indonesia. Rainfall and debris flow occurrence data were collected for the upper Gendol River from October 2016 to February 2018 and divided into calibration and validation datasets. The NBC was used to estimate the status of debris flow incidences displayed in the susceptibility map that is based on the posterior probability from the predictors. The system verification was performed by quantitative dichotomous quality indices along with a contingency table. Using the validation datasets, the advantage of the NBC for estimating debris flow occurrence is confirmed. This work contributes to existing knowledge on estimating debris flow susceptibility through the data mining approach. Despite the existence of predictive uncertainty, the presented system could contribute to the improvement of debris flow countermeasures in volcanic regions.
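To make the warning logic concrete, the sketch below trains a Gaussian NB on assumed predictors (rainfall intensity, a slope factor, and a soil class code per sub-basin window), flags an event when the posterior probability of occurrence exceeds a threshold, and scores the result with simple dichotomous indices from a contingency table. All variable names, the 0.5 threshold, and the synthetic data are placeholders, not the study's configuration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
# Placeholder predictors per sub-basin event window: [rainfall intensity, slope factor, soil class code]
X = rng.random((300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=300) > 1.0).astype(int)  # 1 = debris flow occurred

split = 200
model = GaussianNB().fit(X[:split], y[:split])

# A warning is issued when the posterior probability of occurrence exceeds a chosen threshold.
posterior = model.predict_proba(X[split:])[:, 1]
warn = posterior >= 0.5
obs = y[split:].astype(bool)

# Contingency table and dichotomous quality indices.
hits = np.sum(warn & obs)
false_alarms = np.sum(warn & ~obs)
misses = np.sum(~warn & obs)
pod = hits / (hits + misses) if hits + misses else float("nan")                      # probability of detection
far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")  # false alarm ratio
print(f"POD={pod:.2f}  FAR={far:.2f}")
```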
Based on the lung adenocarcinoma (LUAD) gene expression data from The Cancer Genome Atlas (TCGA) database, the Stromal score, Immune score and Estimate score in the tumor microenvironment (TME) were computed by the Estimation of Stromal and Immune cells in Malignant Tumor tissues using Expression data (ESTIMATE) algorithm. Gene modules significantly related to the three scores were identified by weighted gene co-expression network analysis (WGCNA). Based on the correlation coefficients and P values, 899 key genes affecting the tumor microenvironment were obtained by selecting the two most correlated modules. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis suggested that these key genes were significantly involved in immune-related or cancer-related terms. Through univariate Cox regression and elastic network analysis, genes associated with the prognosis of LUAD patients were screened out, and their prognostic values were further verified by survival analysis and the University of ALabama at Birmingham CANcer (UALCAN) database. The results indicated that eight genes were significantly related to the overall survival of LUAD. Among them, six genes were found to be differentially expressed between tumor and control samples. Immune infiltration analysis further verified that all six genes were significantly related to tumor purity and immune cells. Therefore, these genes were eventually used for constructing a Naive Bayes prediction model of LUAD. The model was verified by the receiver operating characteristic (ROC) curve, where the area under the curve (AUC) reached 92.03%, which suggested that the model could accurately discriminate tumor samples from normal ones. Our study provides an effective model for LUAD prediction, which improves the clinical diagnosis and treatment of LUAD. The results also confirm that the six genes used in the model construction could be potential prognostic biomarkers of LUAD.
Naive Bayes (NB) is one of the most popular classification methods. It is particularly useful when the dimension of the predictor is high and data are generated independently. Meanwhile, social network data are becoming increasingly accessible, due to the fast development of various social network services and websites. By contrast, data generated by a social network are most likely to be dependent, and the dependency is mainly determined by their social network relationships. Then, how to extend the classical NB method to social network data becomes a problem of great interest. To this end, we propose here a network-based naive Bayes (NNB) method, which generalizes the classical NB model to social network data. The key advantage of the NNB method is that it takes the network relationships into consideration. Its computational efficiency makes the NNB method feasible even in large-scale social networks. The statistical properties of the NNB model are theoretically investigated. Simulation studies have been conducted to demonstrate its finite sample performance. A real data example is also analyzed for illustration purposes.
An important problem in wireless communication networks (WCNs) is that they have limited resources, which leads to severe security threats. One approach to finding and detecting attacks is the intrusion detection system (IDS). In this paper, the fuzzy lion Bayes system (FLBS) is proposed as an intrusion detection mechanism. Initially, the data set is grouped into a number of clusters by the fuzzy clustering algorithm. Here, the Naive Bayes classifier is integrated with the lion optimization algorithm, and the new lion naive Bayes (LNB) model is created for optimally generating the probability measures. Then, the LNB model is applied to each data group, and the aggregated data is generated. After generating the aggregated data, the LNB model is applied to the aggregated data, and the abnormal nodes are identified based on the posterior probability function. The performance of the proposed FLBS system is evaluated using the KDD Cup 99 data, and a comparative analysis with existing methods is performed for the evaluation metrics accuracy and false acceptance rate (FAR). The experimental results show that the proposed system achieves the maximum performance, which demonstrates its effectiveness in intrusion detection.
Modern technological advancements have made social media an essential component of daily life. Social media allow individuals to share thoughts, emotions, and ideas. Sentiment analysis evaluates whether the sentiment of a text is positive, negative, neutral, or any other personal emotion, in order to understand the sentiment context of the text. Sentiment analysis is essential in business and society because it impacts strategic decision-making. Sentiment analysis involves challenges due to lexical variation, unlabeled datasets, and text distance correlations. Execution time increases due to the sequential processing of sequence models, whereas the calculation times of Transformer models are reduced because of parallel processing. This study uses a hybrid deep learning strategy to combine the strengths of Transformer and sequence models while avoiding their limitations. In particular, the proposed model integrates the Decoding-enhanced Bidirectional Encoder Representations from Transformers (BERT) with disentangled attention (DeBERTa) and the Gated Recurrent Unit (GRU) for sentiment analysis. Using the Decoding-enhanced BERT technique, the words are mapped into a compact, semantic word embedding space, and the Gated Recurrent Unit model can correctly capture the distance-based contextual semantics. The proposed hybrid model achieves F1-scores of 97% on the Twitter Large Language Model (LLM) dataset, which is much higher than the performance of recent techniques.
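A hedged sketch of how a DeBERTa encoder can be combined with a GRU head in PyTorch: the transformer supplies contextual token embeddings and the GRU re-reads the sequence before classification. The checkpoint name, hidden sizes, pooling choice, and class count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DebertaGRUClassifier(nn.Module):
    def __init__(self, checkpoint="microsoft/deberta-v3-base", gru_hidden=256, num_classes=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(checkpoint)      # contextual token embeddings
        self.gru = nn.GRU(self.encoder.config.hidden_size, gru_hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * gru_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        out, _ = self.gru(hidden)                                  # GRU re-reads the token sequence
        # Mean-pool over non-padding positions before classification.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
        return self.head(pooled)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
batch = tokenizer(["great service, loved it", "terrible experience"],
                  padding=True, truncation=True, return_tensors="pt")
model = DebertaGRUClassifier()
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)   # (2, num_classes)
```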
Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection. However, there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection. Addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, there is a lack of systematic comparison of supervised classifiers for Windows malware detection. Understanding the relative effectiveness of these classifiers can inform the selection of optimal detection methods and improve overall security measures. This study aims to bridge the research gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives include investigating the performance of various classifiers, such as Gaussian Naïve Bayes, K Nearest Neighbors (KNN), the Stochastic Gradient Descent Classifier (SGDC), and Decision Tree, in detecting Windows malware; evaluating the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; identifying the strengths and limitations of different classifiers to provide insights for cybersecurity practitioners and researchers; and offering recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset's characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training utilizes various supervised classifiers, and their performance is evaluated using metrics such as accuracy, precision, recall, and F1 score. The study's outcomes comprise a comparative analysis of supervised machine learning classifiers for Windows malware detection. The results reveal the effectiveness and efficiency of each classifier in detecting different types of malware. Additionally, insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
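The comparison phase of such a study typically boils down to a loop like the one below, which fits each of the named classifiers on the same split and reports accuracy, precision, recall, and F1. The synthetic feature matrix is a placeholder for the (unspecified) malware dataset, so the numbers it prints are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for extracted PE-file features (label 1 = malware, 0 = benign).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

classifiers = {
    "Gaussian Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SGDC": SGDClassifier(random_state=7),
    "Decision Tree": DecisionTreeClassifier(random_state=7),
}

for name, clf in classifiers.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:22s} acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f}")
```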
The increasing amount and intricacy of network traffic in the modern digital era have worsened the difficulty of identifying abnormal behaviours that may indicate potential security breaches or operational interruptions. Conventional detection approaches face challenges in keeping up with the ever-changing strategies of cyber-attacks, resulting in heightened susceptibility and significant harm to network infrastructures. In order to tackle this urgent issue, this project focused on developing an effective anomaly detection system that utilizes machine learning technology. The suggested model utilizes contemporary machine learning algorithms and frameworks to autonomously detect deviations from typical network behaviour. It promptly identifies anomalous activities that may indicate security breaches or performance difficulties. The solution entails a multi-faceted approach encompassing data collection, preprocessing, feature engineering, model training, and evaluation. By utilizing machine learning methods, the model is trained on a wide range of datasets that include both regular and abnormal network traffic patterns. This training ensures that the model can adapt to numerous scenarios. The main priority is to ensure that the system is functional and efficient, with a particular emphasis on reducing false positives to avoid unwanted alerts. Additionally, efforts are directed toward improving anomaly detection accuracy so that the model can consistently distinguish between potentially harmful and benign activity. This project aims to greatly strengthen network security against emerging cyber threats and to improve the resilience and reliability of network infrastructures.
An electric vehicle is becoming one of the popular choices when choosing a vehicle. People are generally impressed with electric vehicles' zero emissions and smooth drives, while unstable battery duration keeps people away. This study tries to identify the primary factors that affect the likelihood of owning an electric vehicle at different income levels. We divide the dataset into three subgroups by household income: $50,000 to $150,000 (the low-medium income level), $150,000 to $250,000 (the medium-high income level), and $250,000 or above (the high income level). We considered several machine learning classifiers, and naive Bayes gave us relatively higher accuracy than the other algorithms in terms of overall accuracy and F1 scores. Based on the probability analysis, we found that one-way commuting distance is the most important factor for all three income levels.