Funding: Taif University Researchers Supporting Project Number (TURSP-2020/73), Taif University, Taif, Saudi Arabia.
Abstract: Heart failure is now widespread throughout the world; heart disease affects approximately 48% of the population, and it is both expensive and difficult to treat. This research paper presents machine learning models to predict heart failure. The fundamental concept is to compare the accuracy of various Machine Learning (ML) algorithms and to apply boosting algorithms to improve the models' predictive accuracy. Supervised algorithms such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and Logistic Regression (LR) are considered to achieve the best results. Boosting algorithms such as Extreme Gradient Boosting (XGBoost) and CatBoost are also used to improve the prediction, together with Artificial Neural Networks (ANN). This research also focuses on data visualization to identify patterns, trends, and outliers in a massive data set. Python and Scikit-learn are used for the ML models; TensorFlow and Keras, along with Python, are used for ANN model training. The DT and RF algorithms achieved the highest accuracy among the classifiers at 95%, while KNN obtained the second-highest accuracy of 93.33%. XGBoost reached 91.67%, SVM, CatBoost, and ANN each reached 90%, and LR reached 88.33%.
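As a rough illustration of the classifier comparison this abstract describes, the sketch below trains the five supervised models with scikit-learn. The file name heart.csv and the target column name are hypothetical, and the paper's actual preprocessing and hyperparameters are not specified here.

```python
# Minimal sketch of the classifier comparison, assuming a CSV dataset
# with a binary "target" column (both names are hypothetical).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart.csv")            # hypothetical file name
X = df.drop(columns=["target"])          # hypothetical label column
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

scaler = StandardScaler().fit(X_train)   # KNN and SVM benefit from scaling
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.2%}")
```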
Abstract: Diabetes mellitus, generally known as diabetes, is one of the most common diseases worldwide. It is a metabolic disease characterized by insulin deficiency, or by glucose (blood sugar) levels that exceed 200 mg/dL (11.1 mmol/L) for prolonged periods, and it may lead to death if left uncontrolled by medication or insulin injections. Diabetes is categorized into two main types, type 1 and type 2, both of which feature glucose levels above "normal," defined as 140 mg/dL. Diabetes is triggered by a malfunction of the pancreas, which releases insulin, the natural hormone responsible for controlling glucose levels in blood cells. Diagnosis and comprehensive analysis of this potentially fatal disease necessitate techniques with minimal rates of error. The primary purpose of this research study is to assess the potential role of machine learning in predicting a person's risk of developing diabetes. Historically, research has supported the use of various machine-learning algorithms, such as naïve Bayes, decision trees, and artificial neural networks, for early diagnosis of diabetes. However, to achieve maximum accuracy and minimal error in diagnostic predictions, there remains an immense need for further research and innovation to improve the machine-learning tools and techniques available to healthcare professionals. Therefore, in this paper, we propose a novel cloud-based machine-learning fusion technique that synthesizes three machine-learning algorithms and uses fuzzy systems to collectively generate highly accurate final decisions regarding the early diagnosis of diabetes.
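The paper's fuzzy inference rules and cloud deployment are not reproduced in the abstract, so the following is only a loose sketch of the fusion idea under stated assumptions: three member models vote through averaged probabilities, and the fused score is mapped to coarse fuzzy-style risk labels. All data, thresholds, and labels are invented stand-ins.

```python
# Loose sketch of a three-classifier fusion. The actual fuzzy system is
# not specified in the abstract; averaged probabilities stand in here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
members = [GaussianNB(), DecisionTreeClassifier(max_depth=5),
           MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)]
for m in members:
    m.fit(X, y)

def fused_risk(sample):
    # Average the positive-class probabilities from the three members.
    p = np.mean([m.predict_proba(sample.reshape(1, -1))[0, 1]
                 for m in members])
    # Map the fused score onto coarse fuzzy-style risk categories
    # (thresholds are invented for illustration).
    if p < 0.3:
        return p, "low risk"
    elif p < 0.7:
        return p, "borderline - refer for testing"
    return p, "high risk"

score, label = fused_risk(X[0])
print(f"fused probability {score:.2f} -> {label}")
```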
Funding: Taif University Researchers Supporting Project Number (TURSP-2020/215), Taif University, Taif, Saudi Arabia.
Abstract: The rapid advancement of wireless communication is forming a hyper-connected 5G network in which billions of linked devices generate massive amounts of data. In software-defined networking (SDN), the traffic control and data forwarding functions are decoupled, which makes the network programmable. Each SDN switch keeps its forwarding information in a flow table, and to handle incoming packets it must search that table for the flow rules that match them. Given the vast quantity of data in data centres, the capacity of the flow table restricts the data plane's forwarding capability, yet the SDN must handle traffic from across the whole network. The flow table depends on Ternary Content Addressable Memory (TCAM) for storing and quickly searching rules, and TCAM is restricted in capacity owing to its high cost and energy consumption. Whenever the flow table is abused and overflows, normal rules can no longer be installed and executed quickly. Here we consider low-rate flow table overflow, in which an attacker causes collision flow rules to be installed and consumes excessive flow table capacity by delivering packets that do not match the flow table at a low rate. This study introduces machine learning techniques for detecting and categorizing low-rate collision flows in the SDN flow table, using a Feed Forward Neural Network (FFNN), K-Means, and a Decision Tree (DT). We generate two network topologies, Fat Tree and Simple Tree, with the Mininet simulator, coupled to the OpenDaylight (ODL) controller. The efficiency and efficacy of the suggested algorithms are assessed using several indicators, such as query success rate, propagation delay, overall dropped packets, energy consumption, bandwidth usage, latency, and throughput. The findings show that the suggested technique for tackling the flow table congestion problem minimizes the number of flows while retaining the statistical consistency of the 5G network. By applying the proposed flow method and checking whether a packet may move from point A to point B without breaking certain rules, the evaluation tool examines every flow against a set of criteria. Compared with existing methods from the literature, the FFNN with DT and the K-Means algorithms obtain accuracies of 96.29% and 97.51%, respectively, in identifying collision flows.
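To make the detection step concrete, the sketch below classifies synthetic flow statistics with the three algorithm families named in the abstract: a scikit-learn multilayer perceptron standing in for the FFNN, a decision tree, and K-Means. The feature set and traffic distributions are invented stand-ins for the Mininet/OpenDaylight measurements.

```python
# Illustrative detection on synthetic flow statistics; the features and
# their distributions are assumptions, not the study's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier  # stands in for the FFNN
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Columns: packets/s, bytes/packet, rule idle time (s). Low-rate collision
# flows send few small packets and let rules sit idle in the table.
normal = rng.normal([200, 800, 2], [50, 200, 1], size=(500, 3))
attack = rng.normal([5, 80, 30], [2, 20, 5], size=(500, 3))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for name, clf in [("DT", DecisionTreeClassifier()),
                  ("FFNN", MLPClassifier(hidden_layer_sizes=(32,),
                                         max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))

# K-Means is unsupervised: cluster all flows, then inspect which cluster
# the known attack flows fall into.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("attack flows mostly in cluster", np.bincount(labels[y == 1]).argmax())
```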
Abstract: The software-defined network (SDN) has become a revolutionary paradigm in networking because it provides more control and operational flexibility over the network infrastructure. The SDN controller is considered the operating system of the SDN-based network infrastructure, and it is responsible for executing the different network applications and maintaining the network services and functionalities. Despite all its tremendous capabilities, SDN faces many security issues due to the complexity of its architecture. Distributed denial of service (DDoS) is a common attack on SDN because of its centralized architecture, especially at the control layer, where it has a network-wide impact. Machine learning is now widely used for fast detection of these attacks. In this paper, some important feature selection methods for machine learning on DDoS detection are evaluated. The selection of optimal features affects the classification accuracy of the machine learning techniques and the performance of the SDN controller. A comparative analysis of feature selection methods and machine learning classifiers is also derived to detect SDN attacks. The experimental results show that the Random Forest (RF) classifier trains the most accurate model, with 99.97% accuracy, using the feature subset selected by the Recursive Feature Elimination (RFE) method.
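The winning combination reported above, RFE feature selection feeding a Random Forest, can be sketched in a few lines of scikit-learn. A synthetic dataset stands in for the DDoS traces, whose fields are not listed in the abstract.

```python
# Sketch of RFE + Random Forest on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=10, random_state=7)
rf = RandomForestClassifier(n_estimators=100, random_state=7)
# Recursively drop the least important features until 10 remain.
selector = RFE(estimator=rf, n_features_to_select=10).fit(X, y)
X_sel = selector.transform(X)

acc = cross_val_score(rf, X_sel, y, cv=5).mean()
print(f"RF on RFE-selected features: {acc:.2%}")
print("selected feature indices:", selector.get_support(indices=True))
```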
Abstract: Studies on ballistic penetration of laminates are complicated but important for designing effective protection of structures. Experimental study is expensive and can often be dangerous; numerical simulation has been an excellent supplement, but the computation is time-consuming. The main aim of this study was to develop and test an effective tool for real-time prediction of projectile penetration of laminates by training a neural network and a decision tree regression model. A large number of finite element models were developed; the residual velocities of projectiles from the finite element simulations were used as the target data and processed to produce a sufficient number of training samples. The study focused on steel 4340/polyurea laminates with various configurations, and four different 3D projectile shapes were modeled and used in the training. The trained neural network and decision tree models were tested using independently generated test samples from the finite element models, and the projectile velocities predicted by the trained machine learning models were compared with the finite element simulations to verify the effectiveness of the models. Additionally, both models were trained using published experimental data on projectile impacts to predict the residual velocity of projectiles for unseen samples, and their performance was evaluated and compared. Models trained with finite element simulation data were found to give more accurate predictions than models trained with experimental data, because finite element modeling can generate a much larger training set; finite element solvers can thus serve as an excellent teacher. This study also showed that the neural network model performs better than the decision tree regression model on a small experimental dataset.
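The surrogate-model idea can be illustrated as follows: train a neural network and a decision tree regressor on pairs of impact conditions and residual velocity, of the kind a finite element solver would supply. The toy residual-velocity law and the input ranges below are hypothetical placeholders, not the paper's configurations.

```python
# Surrogate regression sketch; the input features and the closed-form
# "residual velocity" law are invented stand-ins for FE simulation output.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 1000
impact_v = rng.uniform(300, 900, n)   # impact velocity, m/s
plate_t = rng.uniform(5, 25, n)       # laminate thickness, mm
shape = rng.integers(0, 4, n)         # 4 projectile nose shapes
# Toy law standing in for the FE solver's residual-velocity output.
v_res = np.maximum(0, impact_v - (40 + 10 * shape) * np.sqrt(plate_t))
X = np.column_stack([impact_v, plate_t, shape])
X_tr, X_te, y_tr, y_te = train_test_split(X, v_res, random_state=3)

for name, reg in [("NN", MLPRegressor(hidden_layer_sizes=(64, 64),
                                      max_iter=3000)),
                  ("DT", DecisionTreeRegressor(max_depth=10))]:
    reg.fit(X_tr, y_tr)
    print(name, "R^2:", round(r2_score(y_te, reg.predict(X_te)), 3))
```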
Funding: The authors acknowledge the funding support of FRGS/1/2021/ICT07/UTAR/02/3 and IPSR/RMC/UTARRF/2020-C2/G01 for this study.
Abstract: In defense-in-depth, humans have always been the weakest link in cybersecurity. However, unlike common threats, social engineering poses vulnerabilities that are not directly quantifiable in penetration testing. Most skilled social engineers trick users into giving up information voluntarily through attacks like phishing and adware. Social engineering (SE) in social media is structurally similar to regular posts but carries malicious intent within the sentence semantics. In this paper, a novel SE model is trained using a Recurrent Neural Network with Long Short-Term Memory (RNN-LSTM) to identify well-disguised SE threats in social media posts. We use a custom dataset crawled from hundreds of corporate and personal Facebook posts. First, the social engineering attack detection pipeline (SEAD) is designed to filter out social posts with malicious intent using domain heuristics. Next, each social media post is tokenized into sentences and then analyzed with a sentiment analyzer before being labelled as anomalous or normal training data. Then, we train an RNN-LSTM model to detect five types of social engineering attacks that potentially contain signs of information gathering. Experiments showed that the Social Engineering Attack (SEA) model achieves 0.84 classification precision and 0.81 recall against ground truth labeled by network experts, indicating that semantic and linguistic similarities are effective indicators for early detection of SEA.
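A minimal Keras sketch of an LSTM post classifier in the spirit of the SEA model is given below. The crawled Facebook dataset, the tokenizer, and the five attack classes are not public, so toy posts and a binary anomaly label stand in.

```python
# Minimal LSTM text-classification sketch; posts and labels are toy data,
# and a binary label replaces the paper's five attack classes.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

posts = ["win a free prize, send your login now",
         "had a great lunch with the team today",
         "urgent: verify your account password here",
         "photos from our weekend hike"]
labels = np.array([1, 0, 1, 0])  # 1 = social engineering, 0 = normal

vectorizer = layers.TextVectorization(max_tokens=5000,
                                      output_sequence_length=20)
vectorizer.adapt(posts)

model = tf.keras.Sequential([
    vectorizer,                            # raw text -> token ids
    layers.Embedding(input_dim=5000, output_dim=32),
    layers.LSTM(32),                       # the recurrent (RNN-LSTM) core
    layers.Dense(1, activation="sigmoid"), # anomalous vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["precision", "recall"])
model.fit(np.array(posts), labels, epochs=5, verbose=0)
print(model.predict(np.array(["please confirm your bank details"]))[0, 0])
```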