The presence of bearing faults reduces the efficiency of rotating machines and thus increases energy consumption or can even cause the total stoppage of the machine. It becomes essential to correctly diagnose the fault caused by the bearing, hence the importance of determining an effective feature extraction method that best describes the fault. The aim of this paper is to merge feature selection methods in order to define the most relevant features in the texture of vibration signal images. In this study, the Gray Level Co-occurrence Matrix (GLCM) from texture analysis is applied to the vibration signal represented as images. Feature selection based on merging the PCA (Principal Component Analysis) method and the SFE (Sequential Features Extraction) method is performed to obtain the most relevant features. The multiclass Naïve Bayes classifier is used to test the proposed approach. The success rate of this classification is 98.27%. The relevant features obtained give promising results and are more efficient than the methods observed in the literature.
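A minimal sketch of this kind of pipeline, assuming scikit-image and scikit-learn and using random placeholder images and labels (the GLCM parameters, PCA dimensionality, and selector settings below are illustrative choices, not the paper's exact configuration):

```python
# Illustrative GLCM-texture + merged feature-selection + Naive Bayes pipeline
# (random placeholder data; API names follow scikit-image >= 0.19 and scikit-learn >= 0.24).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

def glcm_features(image_u8):
    """Small GLCM texture-feature vector from an 8-bit grayscale image."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)  # stand-ins for vibration-signal images
y = rng.integers(0, 3, size=60)                                   # stand-ins for fault-class labels

X = np.array([glcm_features(img) for img in images])
model = make_pipeline(
    PCA(n_components=8),                                              # PCA part of the merged selection
    SequentialFeatureSelector(GaussianNB(), n_features_to_select=4),  # sequential part
    GaussianNB(),                                                     # multiclass Naive Bayes
)
print(model.fit(X, y).score(X, y))
```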
This paper proposes an improved Naïve Bayes classifier for sentiment analysis of a large-scale dataset such as YouTube. YouTube contains large volumes of unstructured and unorganized comments and reactions, which carry important information. Organizing large amounts of data and extracting useful information is a challenging task. The extracted information can be considered as new knowledge and can be used for decision-making. We extract comments on YouTube videos, categorize them into domain-specific groups, and then apply the Naïve Bayes classifier with improved techniques. Our method provided a decent 80% accuracy in classifying those comments. This experiment shows that the proposed method provides excellent adaptability for large-scale text classification.
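A toy illustration of the underlying text-classification setup with a bag-of-words Naïve Bayes (the comments, labels, and n-gram settings are invented; the paper's improved techniques and YouTube corpus are not reproduced):

```python
# Toy comment-sentiment classifier: bag-of-words features + multinomial Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = ["great video, loved it", "worst content ever",
            "really helpful tutorial", "boring and too long"]
labels = ["positive", "negative", "positive", "negative"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB(alpha=1.0))
clf.fit(comments, labels)
print(clf.predict(["loved this tutorial"]))  # -> ['positive'] on this toy fit
```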
As the importance of email increases, the amount of malicious email is also increasing, so the need for malicious email filtering is growing. Since it is more economical to combine commodity hardware consisting of a medium server or PC with a virtual environment to use as a single server resource and to filter malicious email using machine learning techniques, we used a Hadoop MapReduce framework together with Naïve Bayes for malicious email filtering. Naïve Bayes was selected because it is one of the top machine learning methods (Support Vector Machine (SVM), Naïve Bayes, K-Nearest Neighbor (KNN), and Decision Tree) in terms of execution time and accuracy. Malicious email was filtered in two ways: with MapReduce programming using the Naïve Bayes technique, a supervised machine learning method, in a performance-optimized Hadoop framework, and with a Python program applying the Naïve Bayes technique in a bare-metal server environment without Hadoop. According to a comparison of the accuracy and predictive error rates of the two methods, the Hadoop MapReduce Naïve Bayes method improved the accuracy of spam and ham email identification 1.11 times and the prediction error rate 14.13 times compared to the non-Hadoop Python Naïve Bayes method.
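To make the MapReduce side concrete, here is a hedged local simulation of the word-count stage that Naïve Bayes training needs, written in the style of a Hadoop Streaming mapper/reducer (the email records are made up, and a real job would read from stdin and write key/value lines to stdout):

```python
# Local simulation of the Hadoop-Streaming-style word-count stage behind Naive Bayes training.
from collections import defaultdict

def mapper(line):
    """Emit ((label, word), 1) pairs from a 'label<TAB>text' record."""
    label, text = line.split("\t", 1)
    for word in text.lower().split():
        yield (label, word), 1

def reducer(pairs):
    """Sum counts per (label, word) key; these become the Naive Bayes likelihood counts."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

records = ["spam\tclaim your free prize now", "ham\tmeeting moved to friday", "spam\tfree prize inside"]
shuffled = sorted(pair for line in records for pair in mapper(line))  # stand-in for the shuffle phase
print(reducer(shuffled))  # e.g. ('spam', 'free') -> 2, ('ham', 'meeting') -> 1, ...
```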
The naïve Bayes classifier is one of the commonly used data mining methods for classification. Despite its simplicity, naïve Bayes is effective and computationally efficient. Although the strong attribute independence assumption in the naïve Bayes classifier makes it a tractable method for learning, this assumption may not hold in real-world applications. Many enhancements to the basic algorithm have been proposed in order to alleviate the violation of the attribute independence assumption. While these methods improve the classification performance, they do not necessarily retain the mathematical structure of the naïve Bayes model, and some do so at the expense of computational time. One approach to reduce the naïveté of the classifier is to incorporate attribute weights in the conditional probability. In this paper, we propose a method to incorporate attribute weights into naïve Bayes. To evaluate the performance of our method, we used public benchmark datasets. We compared our method with the standard naïve Bayes and baseline attribute weighting methods. Experimental results show that our method of incorporating attribute weights improves the classification performance compared to both standard naïve Bayes and baseline attribute weighting methods in terms of classification accuracy and F1, especially when the independence assumption is strongly violated, which was validated using the Chi-square test of independence.
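The general idea behind attribute weighting can be written as P(c|x) ∝ P(c) ∏_i P(x_i|c)^{w_i}, which in log space is a weighted sum of log-likelihoods. A toy sketch under that generic formulation follows (the Gaussian likelihoods, data, and fixed weights are illustrative; the paper's way of learning the weights is not shown):

```python
# Toy attribute-weighted Gaussian Naive Bayes scoring (illustrative fixed weights).
import numpy as np
from scipy.stats import norm

X = np.array([[1.0, 5.0], [1.2, 4.8], [3.0, 1.0], [3.2, 0.8]])
y = np.array([0, 0, 1, 1])
weights = np.array([1.0, 0.4])  # hypothetical attribute weights; all-ones recovers standard NB

def weighted_nb_log_scores(x, X, y, weights):
    """log P(c) + sum_i w_i * log P(x_i | c) for each class c, with Gaussian likelihoods."""
    scores = {}
    for c in np.unique(y):
        Xc = X[y == c]
        log_prior = np.log(len(Xc) / len(X))
        log_lik = norm.logpdf(x, loc=Xc.mean(axis=0), scale=Xc.std(axis=0) + 1e-9)
        scores[c] = log_prior + np.dot(weights, log_lik)
    return scores

print(weighted_nb_log_scores(np.array([1.1, 4.9]), X, y, weights))  # class 0 should score highest
```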
Classification models have received great attention in many domains of research and are a reliable tool for medical disease diagnosis. Classification models are used in disease diagnosis, disease prediction, bioinformatics, crime prediction, and so on. However, existing disease diagnosis models compromise the efficiency of disease prediction. In this paper, a Rough Set Rule-based Multitude Classifier (RS-RMC) is developed to improve the disease prediction rate and enhance the class accuracy of the disease being diagnosed. The RS-RMC involves two steps. Initially, a Rough Set model is used for feature selection, aiming at minimizing the execution time for obtaining the disease feature set. A Multitude Classifier model is presented in the second step for the detection of heart disease and for efficient classification. The Naïve Bayes classifier algorithm is designed for efficient identification of classes to measure the relationship between disease features, improving the disease prediction rate. Experimental analysis shows that RS-RMC reduces the execution time for extracting the disease features with a minimal false positive rate compared to state-of-the-art works.
The freshness of fruits is considered to be one of the essential characteristics for consumers in determining their quality, flavor and nutritional value. The primary need for identifying rotten fruits is to ensure that only fresh and high-quality fruits are sold to consumers. Rotten fruits can foster harmful bacteria, molds and other microorganisms that can cause food poisoning and other illnesses to consumers. The overall purpose of the study is to classify rotten fruits, which can affect the taste, texture, and appearance of other fresh fruits, thereby reducing their shelf life. The agriculture and food industries are increasingly adopting computer vision technology to detect rotten fruits and forecast their shelf life. Hence, this research work mainly focuses on a Convolutional Neural Network (CNN) deep learning model, which helps in the classification of rotten fruits. The proposed methodology involves real-time analysis of a dataset of various types of fruits, including apples, bananas, oranges, papayas and guavas. Similarly, machine learning models such as Gaussian Naïve Bayes (GNB) and random forest are used to predict the fruit's shelf life. The results obtained from the various pre-trained models for rotten fruit detection are analysed based on an accuracy score to determine the best model. In comparison to other pre-trained models, the Visual Geometry Group 16 (VGG16) model obtained a higher accuracy score of 95%. Likewise, the random forest model delivers a better accuracy score of 88% when compared with GNB in forecasting the fruit's shelf life. By developing an accurate classification model, only fresh and safe fruits reach consumers, reducing the risks associated with contaminated produce. Thereby, the proposed approach will have a significant impact on the food industry for efficient fruit distribution and will also benefit customers who purchase fresh fruits.
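A rough sketch of the two stages described above, assuming TensorFlow/Keras and scikit-learn (weights=None keeps it offline, and the shelf-life features and class bins are synthetic placeholders rather than the paper's data):

```python
# Sketch of a VGG16-based fresh/rotten classifier head plus a GNB-vs-random-forest shelf-life model.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

base = VGG16(weights=None, include_top=False, input_shape=(128, 128, 3))  # paper would use ImageNet weights
classifier = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),   # fresh vs. rotten
])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
classifier.summary()

# Shelf-life prediction from hand-crafted features (placeholder values and class bins).
rng = np.random.default_rng(0)
features = rng.random((60, 4))            # e.g. firmness, color statistics, storage temperature
shelf_life_bin = rng.integers(0, 3, 60)   # short / medium / long shelf life
for model in (GaussianNB(), RandomForestClassifier(random_state=0)):
    print(type(model).__name__, model.fit(features, shelf_life_bin).score(features, shelf_life_bin))
```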
Intrusion detection is the process of investigating information about system activities or data to detect any malicious behavior or unauthorized activity. Most IDSs implement the K-means clustering technique due to its linear complexity and fast computing ability. Nonetheless, its naïve use of the mean data value for the cluster center presents a major drawback: two circular clusters with different radii may be centered at the same mean. The K-means algorithm cannot address this condition because the mean values of the various clusters are very similar, and it also fails when the clusters are not spherical. To overcome this issue, a new integrated hybrid model has been proposed that combines expectation maximization (EM) clustering using a Gaussian mixture model (GMM) with a naïve Bayes classifier. In this model, GMM gives more flexibility than K-means in terms of cluster covariance. GMM also uses probability functions and soft clustering, so a single data point can belong to multiple clusters. In GMM, the cluster shape is defined by two parameters: the mean and the standard deviation. This means that, using these two parameters, a cluster can take any kind of elliptical shape. EM-GMM is used to cluster data based on data activity into the corresponding category.
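A rough sketch of an EM-GMM-then-Naïve-Bayes hybrid of this kind, assuming scikit-learn and synthetic two-dimensional data (the real model works on intrusion-detection activity records):

```python
# Sketch: EM-fitted Gaussian mixture clustering followed by a Naive Bayes classifier.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 2, (100, 2))])  # two elliptical clusters
y = np.array([0] * 100 + [1] * 100)                                      # known activity categories

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
X_aug = np.column_stack([X, gmm.predict_proba(X)])  # soft cluster memberships appended as features

clf = GaussianNB().fit(X_aug, y)                    # Naive Bayes assigns the final category
print(clf.score(X_aug, y))
```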
The major environmental hazard in this pandemic is the unhygienic disposal of medical waste. If medical wastage is not properly managed, it becomes a hazard to the environment and to humans. Managing medical wastage is a major issue for cities and municipalities in terms of the environment and logistics. An efficient supply chain with edge computing technology is used in managing medical waste. The supply chain operations include the processing of waste collection, transportation, and disposal of waste. Many research works have been applied to improve the management of wastage. The main issues in the existing techniques are ineffectiveness, high cost, and centralized edge computing, which lead to failures in providing security, trustworthiness, and transparency. To overcome these issues, in this paper we implement an efficient Naïve Bayes classifier algorithm and a Q-Learning algorithm in decentralized edge computing technology with a binary bat optimization algorithm (NBQ-BBOA). This proposed work is used to track, detect, and manage medical waste. To minimize the cost of transferring medical wastage between nodes, the Q-Learning algorithm is used. The accuracy obtained for the Naïve Bayes algorithm is 88%, for the Q-Learning algorithm it is 82%, and for NBQ-BBOA it is 98%. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) for the proposed NBQ-BBOA are 0.012 and 0.045.
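As one concrete piece, a toy tabular Q-learning loop for minimizing transfer cost between waste nodes is sketched below (the 4-node cost matrix, hyperparameters, and node roles are invented; the Naïve Bayes and binary bat optimization parts of NBQ-BBOA are not shown):

```python
# Toy tabular Q-learning for a minimum-cost transfer route to the disposal node.
import numpy as np

rng = np.random.default_rng(0)
COST = np.array([[0, 4, 9, 7],
                 [4, 0, 3, 8],
                 [9, 3, 0, 2],
                 [7, 8, 2, 0]], dtype=float)   # pairwise transfer costs between waste nodes
N, GOAL = 4, 3                                  # node 3 is the disposal site
Q = np.zeros((N, N))
np.fill_diagonal(Q, -1e9)                       # forbid staying at the same node
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(3000):
    state = int(rng.integers(0, N - 1))         # start from any collection node
    while state != GOAL:
        others = [a for a in range(N) if a != state]
        action = int(rng.choice(others)) if rng.random() < epsilon else int(np.argmax(Q[state]))
        reward = -COST[state, action]           # negative cost: maximizing Q minimizes total cost
        best_next = 0.0 if action == GOAL else float(np.max(Q[action]))
        Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])
        state = action

route = [0]
while route[-1] != GOAL and len(route) <= N:
    route.append(int(np.argmax(Q[route[-1]])))
print("learned transfer route from node 0:", route)   # typically [0, 3] with these costs
```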
Machine learning algorithms (MLs) can potentially improve disease diagnostics, leading to early detection and treatment of these diseases. As a malignant tumor whose primary focus is located in the bronchial mucosal epithelium, lung cancer has the highest mortality and morbidity among cancer types, threatening the health and life of patients suffering from the disease. Machine learning algorithms such as Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbor (KNN) and Naïve Bayes (NB) have been used for lung cancer prediction. However, they still face challenges such as high dimensionality of the feature space, over-fitting, high computational complexity, noise and missing data, low accuracy, low precision and high error rates. Ensemble learning, which combines classifiers, may help boost prediction on new data. However, current ensemble ML techniques rarely consider comprehensive evaluation metrics to evaluate the performance of individual classifiers. The main purpose of this study was to develop an ensemble classifier that improves lung cancer prediction. An ensemble machine learning algorithm is developed based on RF, SVM, NB, and KNN. Feature selection is done based on Principal Component Analysis (PCA) and Analysis of Variance (ANOVA). This algorithm is then executed on lung cancer data and evaluated using execution time, true positives (TP), true negatives (TN), false positives (FP), false negatives (FN), false positive rate (FPR), recall (R), precision (P) and F-measure (FM). Experimental results show that the proposed ensemble classifier has the best classification accuracy of 0.9825 with the lowest error rate of 0.0193. This is followed by SVM, with a classification accuracy of 0.9652 at an error rate of 0.0206. On the other hand, NB had the worst performance, with a classification accuracy of 0.8475 at a 0.0738 error rate.
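An illustrative scikit-learn version of such an ensemble with ANOVA and PCA feature reduction is sketched below, using a bundled dataset as a stand-in (the paper's lung cancer data, tuning, and evaluation protocol are not reproduced):

```python
# Illustrative soft-voting ensemble of RF, SVM, NB and KNN with ANOVA + PCA feature reduction.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset for illustration

ensemble = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=15),   # ANOVA F-test feature selection
    PCA(n_components=10),           # PCA on the retained features
    VotingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("nb", GaussianNB()),
                    ("knn", KNeighborsClassifier(n_neighbors=5))],
        voting="soft",              # combine the four base classifiers' probabilities
    ),
)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```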
Roman Urdu has been used for text messaging over the Internet for years, especially in the Indo-Pak Subcontinent. Persons from the subcontinent may speak the same Urdu language but might use different scripts for writing. Communication using Roman characters for the Urdu language on social media is now considered the most typical standard of communication in the Indian subcontinent, which makes it a rich information supply. English text classification is a solved problem, but there have been only a few efforts to examine the rich information supply of Roman Urdu in the past. This is due to the numerous complexities involved in the processing of Roman Urdu data. The complexities associated with Roman Urdu include the non-availability of a tagged corpus, the lack of a set of rules, and the lack of standardized spellings. A large amount of Roman Urdu news data is available on mainstream news websites and social media websites like Facebook and Twitter, but meaningful information can only be extracted if the data is in a structured format. We have developed a Roman Urdu news headline classifier, which will help to classify news into relevant categories on which further analysis and modeling can be done. This research aims to develop a Roman Urdu news classifier that classifies news into five categories (health, business, technology, sports, international). First, we develop the news dataset using scraping tools and, after preprocessing, compare the results of different machine learning algorithms such as Logistic Regression (LR), Multinomial Naïve Bayes (MNB), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN). After this, we use a phonetic algorithm to control lexical variation and test news from different websites. The preliminary results suggest that a more accurate classification can be accomplished by monitoring noise inside the data and by classifying the news accordingly. After applying the above-mentioned machine learning algorithms, results show that the Multinomial Naïve Bayes classifier gives the best accuracy of 90.17%, which is attributed to its handling of the noisy lexical variation.
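A toy version of the MNB baseline with a simple spelling-normalization step is sketched below (the headlines, labels, and normalization map are invented, and the real phonetic algorithm is not reproduced):

```python
# Toy Roman Urdu headline classifier: simple spelling normalization + TF-IDF + multinomial NB.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

NORMALIZE = {"kirkit": "cricket", "krikt": "cricket"}  # hypothetical spelling-variant merges

def normalize(text):
    """Collapse lexical spelling variants before vectorization."""
    return " ".join(NORMALIZE.get(tok, tok) for tok in text.lower().split())

headlines = ["pakistan ne kirkit match jeet liya", "sehat ke liye naya hospital khula",
             "nayi technology launch ho gayi", "krikt team ka naya captain"]
labels = ["sports", "health", "technology", "sports"]

clf = make_pipeline(TfidfVectorizer(preprocessor=normalize), MultinomialNB())
clf.fit(headlines, labels)
print(clf.predict(["kirkit ka bara match aj hoga"]))  # -> ['sports'] on this toy model
```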
Most human deaths are caused by heart diseases. Such diseases cannot be efficiently detected due to the lack of specialized knowledge and experience. Data science is important in the healthcare sector for the role it plays in bulk data processing. Machine learning (ML) also plays a significant part in disease prediction and decision-making in the medical care industry. This study reviews and evaluates the ML approaches applied in heart disease detection. The primary goal is to find a mathematically effective ML algorithm to predict heart diseases more accurately. Various ML approaches including Logistic Regression, Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), t-Distributed Stochastic Neighbor Embedding (t-SNE), Naïve Bayes, and Random Forest were utilized to process a heart disease dataset and extract the unknown patterns of heart disease detection. An analysis was conducted on their performance to examine their efficacy and efficiency. The results show that Random Forest outperformed the other ML algorithms with an accuracy of 97%.
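A minimal cross-validated comparison in the spirit of this review is sketched below, using a bundled scikit-learn dataset as a stand-in for the heart disease data (t-SNE is omitted since it is an embedding rather than a classifier):

```python
# Quick head-to-head of the reviewed classifier families with 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in for a heart disease dataset
models = {
    "Logistic Regression": LogisticRegression(max_iter=2000),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)   # scale features for distance/margin-based models
    print(f"{name}: {cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```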
Social media networks are becoming essential to our daily activities, and many issues arise from this great involvement in our lives. Cyberbullying is a social media network issue and a global crisis affecting the victims and society as a whole. It results from a misunderstanding regarding freedom of speech. In this work, we propose a methodology for detecting such behaviors (bullying, harassment, and hate-related texts) using supervised machine learning algorithms (SVM, Naïve Bayes, logistic regression, and random forest) and for predicting a topic associated with these text data using unsupervised natural language processing, such as latent Dirichlet allocation. In addition, we used accuracy, precision, recall, and F1 score to assess the classifiers. Results show that logistic regression, support vector machine, random forest, and Naïve Bayes achieve 95%, 94.97%, 94.66%, and 93.1% accuracy, respectively.
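A minimal sketch of the unsupervised topic step with latent Dirichlet allocation follows (the posts are invented; the supervised classifiers listed above would run on the same texts):

```python
# Minimal LDA topic sketch over toy social-media posts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = ["you are so stupid nobody likes you", "great game last night well played",
         "stop messaging me you creep", "loved the concert amazing show"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]   # top words per inferred topic
    print(f"topic {k}: {top}")
```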
The exponential pace of the spread of the digital world has served as one of the forces generating an enormous amount of information flowing over the network. These data will always remain under threat, as intruders and hackers consistently try to breach security systems to gain insights into personal information. In this paper, the authors propose the HDTbNB (Hybrid Decision Tree-based Naïve Bayes) algorithm to find the essential features without data scaling, maximizing the model's performance by reducing the false alarm rate and training period and reducing zero frequency, with enhanced accuracy of the IDS (Intrusion Detection System). They further analyze the performance of distinct machine learning algorithms, namely Naïve Bayes, Decision Tree, K-Nearest Neighbors and Logistic Regression, on the KDD 99 dataset. The performance of the algorithm is evaluated through a comparative analysis of computed parameters such as accuracy, macro average, and weighted average. The findings show percentage increases in accuracy, precision, sensitivity, and specificity of 9.3%, 6.4%, 12.5%, and 5.2%, respectively, and a decrease in misclassification of 81%.
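One plausible reading of a decision-tree-then-Naïve-Bayes hybrid is sketched below on synthetic data: the tree ranks features and an unscaled Gaussian NB classifies on the top-ranked subset. This is an illustration only, not the authors' exact HDTbNB:

```python
# Illustrative tree-guided feature selection followed by an unscaled Gaussian Naive Bayes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, n_features=30, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)  # stand-in for KDD-99-style records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
top = np.argsort(tree.feature_importances_)[-10:]               # keep the 10 most informative features

nb = GaussianNB().fit(X_tr[:, top], y_tr)                       # no feature scaling required
print("hybrid accuracy:", nb.score(X_te[:, top], y_te))
print("plain NB accuracy:", GaussianNB().fit(X_tr, y_tr).score(X_te, y_te))
```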
In recent years, machine learning (ML) and deep learning (DL) have significantly advanced intrusion detection systems, effectively addressing potential malicious attacks across networks. This paper introduces a robust method for detecting and categorizing attacks within the Internet of Things (IoT) environment, leveraging the NSL-KDD dataset. To achieve high accuracy, the authors use a feature extraction technique combining an autoencoder with a gated recurrent unit (GRU). The relevant features are then selected using a cuckoo search algorithm integrated with particle swarm optimization (PSO), and PSO is also employed for training on the features. The final classification of features is carried out using the proposed RF-GNB, a random forest combined with the Gaussian Naïve Bayes classifier. The proposed model has been evaluated and its performance verified with standard metrics such as precision, accuracy rate, recall, and F1-score, and it has been compared with different existing models. The generated results, which detected approximately 99.87% of intrusions within the IoT environment, demonstrate the high performance of the proposed method. These results affirm the efficacy of the proposed method in increasing the accuracy of intrusion detection within IoT network systems.
The Washington, DC crash statistics report for the period from 2013 to 2015 shows that the city recorded about 41789 crashes at unsignalized intersections, which resulted in 14168 injuries and 51 fatalities. The economic cost of these fatalities has been estimated to be in the millions of dollars. It is therefore necessary to investigate the predictability of the occurrence of these crashes, based on pertinent factors, in order to provide mitigating measures. This research focused on the development of models to predict the injury severity of crashes using support vector machines (SVMs) and Gaussian naïve Bayes classifiers (GNBCs). The models were developed based on 3307 crashes that occurred from 2008 to 2015. Eight SVM models and a GNBC model were developed. The most accurate model was the SVM with a radial basis kernel function. This model predicted the severity of an injury sustained in a crash with an accuracy of approximately 83.2%. The GNBC produced the worst-performing model, with an accuracy of 48.5%. These models will enable transport officials to identify crash-prone unsignalized intersections and provide the necessary countermeasures beforehand.
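A sketch of the two model families being compared, on synthetic multi-class data standing in for injury-severity records (the actual 3307-crash dataset and its feature encodings are not reproduced):

```python
# Sketch: RBF-kernel SVM vs. Gaussian Naive Bayes on synthetic injury-severity-style data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)  # stand-in for injury-severity classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm_rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
gnbc = GaussianNB()

print("SVM (RBF):", svm_rbf.fit(X_tr, y_tr).score(X_te, y_te))
print("GNBC:     ", gnbc.fit(X_tr, y_tr).score(X_te, y_te))
```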
Syndrome differentiation is the core diagnosis method of Traditional Chinese Medicine (TCM). We propose a method that simulates syndrome differentiation through deductive reasoning on a knowledge graph to achieve automated diagnosis in TCM. We analyze the reasoning path patterns from symptoms to syndromes on the knowledge graph. There are two kinds of path patterns in the knowledge graph: one-hop and two-hop. The one-hop path pattern maps a symptom to syndromes directly. The two-hop path pattern maps a symptom to syndromes through the nature of disease, etiology, and pathomechanism to support the diagnostic reasoning. Considering the different support strengths of the knowledge paths in reasoning, we design a dynamic weight mechanism. We utilize Naïve Bayes and TF-IDF to implement the reasoning method and the weighted score calculation. The proposed method reasons about the syndrome results by calculating the possibility according to the weighted score of the paths in the knowledge graph, based on the reasoning path patterns. We evaluate the method with clinical records and clinical practice in hospitals. The preliminary results suggest that the method achieves high performance and can help TCM doctors make better diagnosis decisions in practice. Meanwhile, the method is robust and explainable under the guidance of the knowledge graph. It could help TCM physicians, especially primary physicians in rural areas, and provide clinical decision support in clinical practice.
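A toy weighted path-scoring routine over a symptom-to-syndrome graph is sketched below (the nodes, edge weights, and the fixed one-hop/two-hop weights are invented; the paper derives its weights from Naïve Bayes and TF-IDF statistics):

```python
# Toy one-hop / two-hop weighted path scoring from a symptom to candidate syndromes.
from collections import defaultdict

ONE_HOP = {("chills", "wind-cold syndrome"): 0.9}                       # symptom -> syndrome
TWO_HOP = {("chills", "cold pathogen"): 0.7,                            # symptom -> intermediate
           ("cold pathogen", "wind-cold syndrome"): 0.8,                # intermediate -> syndrome
           ("chills", "yang deficiency"): 0.3,
           ("yang deficiency", "deficiency-cold syndrome"): 0.6}
W_ONE, W_TWO = 1.0, 0.5   # path-pattern weights; the paper learns these dynamically

def score_syndromes(symptom):
    """Accumulate a weighted score per syndrome from one-hop and two-hop reasoning paths."""
    scores = defaultdict(float)
    for (s, syndrome), w in ONE_HOP.items():
        if s == symptom:
            scores[syndrome] += W_ONE * w
    for (s, mid), w1 in TWO_HOP.items():
        if s != symptom:
            continue
        for (m, syndrome), w2 in TWO_HOP.items():
            if m == mid:
                scores[syndrome] += W_TWO * w1 * w2
    return dict(scores)

print(score_syndromes("chills"))  # e.g. wind-cold syndrome ~1.18, deficiency-cold syndrome ~0.09
```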