Journal Articles
172,774 articles found
1. Feature extraction for machine learning-based intrusion detection in IoT networks
Authors: Mohanad Sarhan, Siamak Layeghy, Nour Moustafa, Marcus Gallagher, Marius Portmann. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 1, pp. 205-216.
A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). The resulting network interruptions and loss of sensitive data have made improving NIDS technologies an active research area. An analysis of related works shows that most researchers aim to obtain better classification results by applying untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques to NIDS datasets. However, these datasets differ in feature sets, attack types, and network design. This paper therefore investigates whether such techniques can be generalised across datasets. Six ML models are utilised: a Deep Feed Forward (DFF) network, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). Three Feature Extraction (FE) algorithms, namely Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), are evaluated on three benchmark datasets: UNSW-NB15, ToN-IoT, and CSE-CIC-IDS2018. Although PCA and AE have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no single FE method or ML model achieves the best scores across all datasets. The optimal number of extracted dimensions is identified for each dataset, and LDA degrades the performance of the ML models on two datasets. Variance is used to analyse the extracted dimensions of LDA and PCA. Finally, the paper concludes that the choice of dataset significantly alters the performance of the applied techniques, and argues that a universal (benchmark) feature set is needed to facilitate further advancement of research in this field.
Keywords: feature extraction; machine learning; network intrusion detection system; IoT
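The study's central experiment, sweeping the number of extracted dimensions per FE method and scoring classifiers on the reduced features, can be prototyped in a few lines of scikit-learn. This is a minimal sketch under stated assumptions: the synthetic data merely stands in for a NIDS dataset such as UNSW-NB15, and the dimension grid and classifier are illustrative, not the authors' configuration.

```python
# Sketch: sweep PCA dimensions (the choice the paper says is often overlooked)
# and compare against LDA. Dataset loading is a placeholder assumption.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for a NIDS dataset: X = flow features, y = attack label.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=15, random_state=0)

for n_dims in (2, 5, 10, 20):
    pipe = make_pipeline(StandardScaler(), PCA(n_components=n_dims),
                         LogisticRegression(max_iter=1000))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"PCA dims={n_dims:2d}  acc={acc:.3f}")

# LDA extracts at most (n_classes - 1) dimensions, so binary data yields one component.
lda = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis(),
                    LogisticRegression(max_iter=1000))
print(f"LDA            acc={cross_val_score(lda, X, y, cv=5).mean():.3f}")
```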
2. SNR and RSSI Based an Optimized Machine Learning Based Indoor Localization Approach: Multistory Round Building Scenario over LoRa Network
Authors: Muhammad Ayoub Kamal, Muhammad Mansoor Alam, Aznida Abu Bakar Sajak, Mazliham Mohd Su’ud. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 1927-1945.
In situations where the precise position of a machine is unknown, localization becomes crucial. This research focuses on improving position prediction accuracy over a long-range (LoRa) network using an optimized machine learning (ML) based technique. To increase the prediction accuracy of reference point positions from data collected with the fingerprinting method over LoRa technology, this study proposes an optimized ML-based algorithm. Received signal strength indicator (RSSI) data from sensors at different positions was first gathered experimentally through the LoRa network in a multistory round-layout building. The noise factor is also taken into account, and the signal-to-noise ratio (SNR) value is recorded for every RSSI measurement. The study examines reference point accuracy with a modified KNN method (MKNN), created to predict the position of the reference point more precisely. The findings show that MKNN outperformed the other algorithms in terms of accuracy and complexity.
Keywords: indoor localization; MKNN; LoRa; machine learning; classification; RSSI; SNR; localization
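The abstract does not spell out the MKNN modification, so the sketch below shows only the plain fingerprinting baseline it builds on: a distance-weighted KNN over (RSSI, SNR) feature vectors. All numeric values are invented examples.

```python
# Hedged sketch of RSSI/SNR fingerprinting with weighted KNN. The paper's exact
# MKNN modification is not described in the abstract; this is a plain baseline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Fingerprint database: one row per measurement, columns = [RSSI_dBm, SNR_dB];
# labels = reference-point IDs. Values below are invented examples.
X_train = np.array([[-97.0, 7.5], [-103.0, 4.2], [-88.5, 9.8], [-110.0, 1.1]])
y_train = np.array([0, 1, 2, 3])   # reference points on different floors

# Distance-weighted KNN: nearer fingerprints dominate the vote.
knn = KNeighborsClassifier(n_neighbors=3, weights="distance")
knn.fit(X_train, y_train)

print(knn.predict([[-100.0, 5.0]]))  # predicted reference point for a new reading
```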
3. A Hybrid Machine Learning Approach for Improvised QoE in Video Services over 5G Wireless Networks
Authors: K. B. Ajeyprasaath, P. Vetrivelan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3195-3213.
Video streaming applications have grown considerably in recent years and have become one of the most significant contributors to global internet traffic. According to recent studies, the telecommunications industry loses millions of dollars due to poor video Quality of Experience (QoE) for users. Among the standard proposals for standardizing the quality of video streaming over internet service providers (ISPs) is the Mean Opinion Score (MOS). However, determining QoE accurately via MOS is subjective and laborious, and it varies depending on the user. A fully automated data analytics framework is required to reduce the inter-operator variability inherent in QoE assessment. This work addresses this concern by proposing a novel hybrid XGBStackQoE analytical model using a two-level layering technique. Level one combines multiple Machine Learning (ML) models, each trained on the entire training dataset. The level-two hybrid XGBStackQoE model is then fitted using the outputs (meta-features) of the level-one ML models. The proposed model outperformed conventional models, with an accuracy improvement of 4 to 5 percent. The proposed framework could significantly improve video QoE accuracy.
Keywords: hybrid XGBStackQoE-model; machine learning; MOS; performance metrics; QoE; 5G; video services
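The two-level layering described here matches the classic stacking pattern: level-one models produce out-of-fold meta-features, and a level-two XGBoost model is fitted on them. A hedged sketch follows; the base learners and data are assumptions, not the paper's exact XGBStackQoE configuration.

```python
# Minimal two-level stacking sketch in the spirit of XGBStackQoE: level-one
# models produce meta-features, a level-two XGBoost model is fit on them.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Placeholder for QoE data: network KPIs as features, MOS class as the label.
X, y = make_classification(n_samples=1500, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=XGBClassifier(n_estimators=200, eval_metric="logloss"),
    cv=5,                     # out-of-fold predictions become the meta-features
)
print("stacked accuracy:", stack.fit(X_tr, y_tr).score(X_te, y_te))
```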
4. Label Recovery and Trajectory Designable Network for Transfer Fault Diagnosis of Machines With Incorrect Annotation
Authors: Bin Yang, Yaguo Lei, Xiang Li, Naipeng Li, Asoke K. Nandi. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 4, pp. 932-945.
The success of deep transfer learning in fault diagnosis is attributed to the collection of high-quality labeled data from the source domain. In engineering scenarios, however, such high-quality label annotation is difficult and expensive to achieve. Incorrect label annotation produces two negative effects: 1) the complex decision boundary of diagnosis models lowers the generalization performance on the target domain, and 2) the distribution of target domain samples becomes misaligned with the false-labeled samples. To overcome these negative effects, this article proposes the label recovery and trajectory designable network (LRTDN). LRTDN consists of three parts. First, a residual network with dual classifiers learns features from cross-domain samples. Second, an annotation check module generates a label anomaly indicator that can modify the abnormal labels of false-labeled samples in the source domain; by training on the relabeled samples, the complexity of the diagnosis model is reduced via semi-supervised learning. Third, adaptation trajectories are designed for the sample distributions across domains, ensuring that the target domain samples are adapted only with the pure-labeled samples. LRTDN is verified by two case studies, in which the diagnosis knowledge of bearings is transferred across different working conditions as well as different yet related machines. The results show that LRTDN offers high diagnosis accuracy even in the presence of incorrect annotation.
Keywords: deep transfer learning; domain adaptation; incorrect label annotation; intelligent fault diagnosis; rotating machines
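The annotation-check idea, flagging source samples whose given label disagrees with a confident model prediction and then relabeling them, can be illustrated with a generic noisy-label filter. The sketch below is not the LRTDN module itself; the confidence threshold and classifier are assumptions.

```python
# Hedged sketch of an annotation check: flag source samples whose given label
# disagrees with a confident cross-validated prediction, then relabel them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
noisy = rng.choice(len(y), size=50, replace=False)   # corrupt 5% of labels
y_noisy = y.copy()
y_noisy[noisy] ^= 1

# Out-of-fold probabilities prevent the model from memorizing its own labels.
proba = cross_val_predict(RandomForestClassifier(n_estimators=200), X, y_noisy,
                          cv=5, method="predict_proba")
pred = proba.argmax(axis=1)
suspect = (proba.max(axis=1) > 0.9) & (pred != y_noisy)   # label anomaly indicator

y_fixed = y_noisy.copy()
y_fixed[suspect] = pred[suspect]                          # recover labels
print("flagged:", suspect.sum(),
      " correctly recovered:", (y_fixed[noisy] == y[noisy]).sum())
```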
5. Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting
Authors: Ying Su, Morgan C. Wang, Shuai Liu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3529-3549.
Long-term time series forecasting is a crucial research domain within automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and necessitates substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models; the complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory (LSTM), a variant of Recurrent Neural Networks whose memory cells and gating mechanisms facilitate long-term time series prediction. Because the forecasting accuracy of particular neural network and traditional models can degrade significantly on long-term tasks, our research demonstrates that this approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model in time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we evaluated both ARIMA and the proposed method on simulated and real time-series data over both short and long horizons. Furthermore, our findings indicate its superiority over alternative network architectures, including Fully Connected Neural Networks, Convolutional Neural Networks, and Non-pooling Convolutional Neural Networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasts, and can be widely applied across domains, particularly in business and finance.
Keywords: automated machine learning; autoregressive integrated moving average; neural networks; time series analysis
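A minimal univariate LSTM forecaster of the kind the abstract describes can be written in a few lines of Keras. Window length, layer sizes, and the synthetic series below are illustrative assumptions.

```python
# Minimal univariate LSTM forecasting sketch (Keras). Window length, layer
# sizes, and the synthetic series are illustrative assumptions.
import numpy as np
import tensorflow as tf

series = np.sin(np.arange(1000) * 0.02) + 0.1 * np.random.randn(1000)

def windows(x, n_in=24):
    """Slice a 1-D series into (samples, n_in, 1) inputs and next-step targets."""
    X = np.stack([x[i:i + n_in] for i in range(len(x) - n_in)])
    return X[..., None], x[n_in:]

X, y = windows(series)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(32),            # memory cells + gates handle long lags
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("1-step forecast:", model.predict(X[-1:], verbose=0)[0, 0])
```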
6. Predictive value of machine learning models for lymph node metastasis in gastric cancer: A two-center study
Authors: Tong Lu, Miao Lu, Dong Wu, Yuan-Yuan Ding, Hao-Nan Liu, Tao-Tao Li, Da-Qing Song. World Journal of Gastrointestinal Surgery (SCIE), 2024, Issue 1, pp. 85-94.
BACKGROUND: Gastric cancer is one of the most common malignant tumors of the digestive system, ranking sixth in incidence and fourth in mortality worldwide. Since 42.5% of metastatic lymph nodes in gastric cancer belong to the nodule type and peripheral type, the application of imaging diagnosis is restricted. AIM: To establish models for predicting the risk of lymph node metastasis in gastric cancer patients using machine learning (ML) algorithms and to evaluate their predictive performance in clinical practice. METHODS: Data of 369 patients who underwent radical gastrectomy at the Department of General Surgery of the Affiliated Hospital of Xuzhou Medical University (Xuzhou, China) from March 2016 to November 2019 were collected and retrospectively analyzed as the training group. In addition, data of 123 patients who underwent radical gastrectomy at the Department of General Surgery of Jining First People's Hospital (Jining, China) were collected and analyzed as the verification group. Seven ML models, including decision tree, random forest, support vector machine (SVM), gradient boosting machine, naive Bayes, neural network, and logistic regression, were developed to evaluate the occurrence of lymph node metastasis in patients with gastric cancer. The ML models were established following ten cross-validation iterations using the training dataset, and each model was subsequently assessed using the test dataset. Model performance was evaluated by comparing the area under the receiver operating characteristic curve of each model. RESULTS: Among the seven ML models, all except SVM exhibited high accuracy and reliability, and the influences of the various risk factors on the models are intuitive. CONCLUSION: The ML models developed exhibit strong predictive capabilities for lymph node metastasis in gastric cancer, which can aid personalized clinical diagnosis and treatment.
Keywords: machine learning; prediction model; gastric cancer; lymph node metastasis
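The comparison protocol, several classifiers scored by cross-validated AUC, maps directly onto scikit-learn. The sketch below uses simulated data in place of the clinical features; the ten-fold setting mirrors the abstract, everything else is an assumption.

```python
# Sketch of the paper's model comparison: seven classifiers scored by
# cross-validated ROC-AUC. Clinical features are simulated placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=369, n_features=15, weights=[0.6, 0.4], random_state=0)

models = {"DT": DecisionTreeClassifier(), "RF": RandomForestClassifier(),
          "SVM": SVC(probability=True), "GBM": GradientBoostingClassifier(),
          "NB": GaussianNB(), "NN": MLPClassifier(max_iter=1000),
          "LR": LogisticRegression(max_iter=1000)}

for name, clf in models.items():   # ten-fold CV mirrors the paper's protocol
    auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name:4s} AUC = {auc:.3f}")
```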
7. Least Squares One-Class Support Tensor Machine
Authors: Kaiwen Zhao, Yali Fan. Journal of Computer and Communications, 2024, Issue 4, pp. 186-200.
The one-class classification problem has become popular in many fields, with a wide range of applications in anomaly detection, fault diagnosis, and face recognition. We investigate the one-class classification problem for second-order tensor data. Traditional vector-based one-class classification methods such as the one-class support vector machine (OCSVM) and the least squares one-class support vector machine (LSOCSVM) have limitations when tensors are used as input data, so we propose a new tensor one-class classification method, LSOCSTM, which uses tensors directly as input. On one hand, tensor input makes it possible not only to classify tensor data, but also to improve accuracy on vector data by reshaping it into a higher-dimensional tensor before classification, while overcoming the over-fitting problem. On the other hand, unlike the one-class support tensor machine (OCSTM), we use a squared loss instead of the original loss function, so that we solve a series of linear equations instead of quadratic programming problems. Using the distance to the hyperplane as the classification metric, the proposed method is more accurate and faster than existing methods. Experimental results show the high efficiency of the proposed method compared with several state-of-the-art methods.
Keywords: least squares one-class support tensor machine; one-class classification; upscale; least squares one-class support vector machine; one-class support tensor machine
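The computational point of the abstract, that a squared loss turns the one-class problem into a linear system rather than a QP, can be shown in the vector (LS-OCSVM) case: with squared slacks the dual reduces to solving (K + I/C)a = 1. The formulation below is a common least-squares one-class variant stated as an assumption, not the paper's tensor derivation.

```python
# Hedged sketch of the least-squares one-class idea in the vector case:
# with squared slacks, training reduces to one linear solve, (K + I/C) a = 1,
# instead of a QP. This is a common LS-OCSVM variant, an assumption here.
import numpy as np
from scipy.spatial.distance import cdist

def rbf(A, B, gamma=0.5):
    return np.exp(-gamma * cdist(A, B, "sqeuclidean"))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # one-class training data

C = 10.0
K = rbf(X, X)
alpha = np.linalg.solve(K + np.eye(len(X)) / C, np.ones(len(X)))  # linear system

def score(Z):
    """Proxy for distance to the learned hyperplane; larger = more normal."""
    return rbf(Z, X) @ alpha

threshold = np.quantile(score(X), 0.05)        # accept ~95% of training data
test = np.array([[0.1, -0.2], [6.0, 6.0]])     # inlier vs. obvious outlier
print(score(test) >= threshold)                # expected: [ True False ]
```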
8. A machine learning approach for the prediction of pore pressure using well log data of Hikurangi Tuaheni Zone of IODP Expedition 372, New Zealand
Authors: Goutami Das, Saumen Maiti. Energy Geoscience (EI), 2024, Issue 2, pp. 225-231.
Pore pressure (PP) information plays an important role in analysing the geomechanical properties of a reservoir and in hydrocarbon field development. PP prediction is essential for safe drilling operations and is a fundamental input for well design and mud weight estimation for wellbore stability. However, predicting the pore pressure trend in complex geological provinces is challenging, particularly in oceanic slope settings where the sedimentation rate is relatively high and PP can be driven by various complex geo-processes. To overcome these difficulties, an advanced machine learning (ML) tool is implemented in combination with empirical methods. The empirical method for PP prediction comprises a data pre-processing stage and a model establishment stage. Eaton's method and the porosity method are used for PP calculation of well U1517A, located at the Tuaheni Landslide Complex of the Hikurangi Subduction Zone of IODP Expedition 372. Gamma-ray, sonic travel time, bulk density, and sonic-derived porosity are extracted from well log data to construct the theoretical framework. Normal compaction trend (NCT) curve analysis is used to check the optimum fit of the low-permeability zone data. Statistical analysis is performed using histogram analysis and a Pearson correlation coefficient matrix with the PP data series to identify potential input combinations for ML-based predictive model development. The dataset is prepared and divided into two parts, training and testing, and the PP data and well logs of borehole U1517A are pre-processed and scaled to the range [-1, +1] before model fitting. A Decision Tree Regression (DTR) model is built, and its performance is compared in order to predict the PP and identify the overpressure zone in the Hikurangi Tuaheni Zone of IODP Expedition 372.
Keywords: well log; pore pressure; machine learning (ML); IODP; Hikurangi Tuaheni Zone; IODP Expedition 372
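The described workflow, scaling well-log features to [-1, +1] and fitting a Decision Tree Regression model, is straightforward to sketch with scikit-learn; the synthetic logs and tree depth below are assumptions.

```python
# Sketch of the workflow: scale well-log features to [-1, +1], then fit a
# decision tree regressor for pore pressure. Features are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
# Columns stand in for gamma-ray, sonic travel time, bulk density, porosity.
X = rng.normal(size=(500, 4))
pp = 2.0 + X @ np.array([0.3, 0.8, 0.5, -0.4]) + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, pp, test_size=0.3, random_state=42)

model = make_pipeline(MinMaxScaler(feature_range=(-1, 1)),   # the paper's scaling
                      DecisionTreeRegressor(max_depth=6, random_state=42))
model.fit(X_tr, y_tr)
print("R^2 on held-out logs:", round(model.score(X_te, y_te), 3))
```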
9. Pioneering role of machine learning in unveiling intensive care unit-acquired weakness
Authors: Silvano Dragonieri. World Journal of Clinical Cases (SCIE), 2024, Issue 13, pp. 2157-2159.
In research published in the World Journal of Clinical Cases, Wang and Long conducted a quantitative analysis to delineate the risk factors for intensive care unit-acquired weakness (ICU-AW) utilizing advanced machine learning methodologies. The study employed a multilayer perceptron neural network to accurately predict the incidence of ICU-AW, focusing on critical variables such as ICU stay duration and mechanical ventilation. This research marks a significant advancement in applying machine learning to clinical diagnostics, offering a new paradigm for predictive medicine in critical care. It underscores the importance of integrating artificial intelligence technologies into clinical practice to enhance patient management strategies and calls for interdisciplinary collaboration to drive innovation in healthcare.
Keywords: intensive care unit-acquired weakness; machine learning; multilayer perceptron neural network; predictive medicine; interdisciplinary collaboration
10. Effects of data smoothing and recurrent neural network (RNN) algorithms for real-time forecasting of tunnel boring machine (TBM) performance
Authors: Feng Shan, Xuzhen He, Danial Jahed Armaghani, Daichao Sheng. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 5, pp. 1538-1551.
Tunnel boring machines (TBMs) have been widely utilised in tunnel construction due to their high efficiency and reliability. Accurately predicting TBM performance can improve project time management, cost control, and risk management. This study uses deep learning to develop real-time models for predicting the penetration rate (PR). The models are built using data from the Changsha metro project, and their performance is evaluated on unseen data from the Zhengzhou metro project. In one-step forecasting, the predicted penetration rate follows the trend of the measured penetration rate in both training and testing. The autoregressive integrated moving average (ARIMA) model is compared with the recurrent neural network (RNN) model. The results show that univariate models, which consider only the historical penetration rate itself, perform better than multivariate models that take into account multiple geological and operational (GEO and OP) parameters. Next, an RNN variant combining the penetration rate time series with the last-step geological and operational parameters is developed, and it performs better than the other models. A sensitivity analysis shows that the penetration rate is the most important parameter, while the other parameters have a smaller impact on time series forecasting. It is also found that smoothed data are easier to predict with high accuracy, although over-simplified data can lose the real characteristics of the time series. In conclusion, the RNN variant can accurately predict the next-step penetration rate, and data smoothing is crucial in time series forecasting. This study provides practical guidance for TBM performance forecasting in engineering practice.
Keywords: tunnel boring machine (TBM); penetration rate (PR); time series forecasting; recurrent neural network (RNN)
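The study's two ingredients, pre-smoothing the penetration-rate series and training an RNN for one-step-ahead forecasts, can be sketched as follows. The smoothing span, window size, and synthetic series are illustrative assumptions.

```python
# Sketch: smooth the penetration-rate series, then train a small RNN for
# one-step-ahead forecasts. Smoothing span and window size are assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
pr = 40 + 5 * np.sin(np.arange(2000) * 0.01) + rng.normal(0, 2, 2000)  # mm/min

def smooth(x, span=9):
    """Centered moving average, the kind of pre-smoothing the study evaluates."""
    return np.convolve(x, np.ones(span) / span, mode="valid")

def windows(x, n=30):
    X = np.stack([x[i:i + n] for i in range(len(x) - n)])
    return X[..., None], x[n:]

X, y = windows(smooth(pr))
model = tf.keras.Sequential([tf.keras.layers.Input(shape=(30, 1)),
                             tf.keras.layers.SimpleRNN(16),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print("next-step PR:", model.predict(X[-1:], verbose=0)[0, 0])
```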
11. Machine Learning Empowered Security and Privacy Architecture for IoT Networks with the Integration of Blockchain
Authors: Sohaib Latif, M. Saad Bin Ilyas, Azhar Imran, Hamad Ali Abosaq, Abdulaziz Alzubaidi, Vincent Karovic Jr. Intelligent Automation & Soft Computing, 2024, Issue 2, pp. 353-379.
The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems owing to their vulnerability to a single point of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices, and previous machine learning approaches were unable to detect denial-of-service (DoS) attacks. This study introduces a novel decentralized and secure framework with blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify signatures for access. Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to validate devices as trusted or untrusted via smart contracts, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results show that the suggested framework outperforms existing approaches in terms of accuracy, precision, sensitivity, recall, and F-measure, at 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Keywords: machine learning; Internet of Things; blockchain; data privacy; security; Industry 4.0
12. Machine Learning Models for Heterogenous Network Security Anomaly Detection
Authors: Mercy Diligence Ogah, Joe Essien, Martin Ogharandukun, Monday Abdullahi. Journal of Computer and Communications, 2024, Issue 6, pp. 38-58.
The increasing volume and intricacy of network traffic in the modern digital era have worsened the difficulty of identifying abnormal behaviours that may indicate potential security breaches or operational interruptions. Conventional detection approaches struggle to keep up with the ever-changing strategies of cyber-attacks, resulting in heightened susceptibility and significant harm to network infrastructures. To tackle this urgent issue, this project focused on developing an effective anomaly detection system using machine learning. The suggested model uses contemporary machine learning algorithms and frameworks to autonomously detect deviations from typical network behaviour, promptly identifying anomalous activities that may indicate security breaches or performance problems. The solution entails a multi-faceted approach encompassing data collection, preprocessing, feature engineering, model training, and evaluation. The model is trained on a wide range of datasets that include both regular and abnormal network traffic patterns, ensuring it can adapt to numerous scenarios. The main priority is a functional and efficient system, with particular emphasis on reducing false positives to avoid unwanted alerts, alongside improving detection accuracy so that the model consistently distinguishes between potentially harmful and benign activity. This project aims to greatly strengthen network security against emerging cyber threats and to improve the resilience and reliability of network infrastructures.
Keywords: cyber-security; network anomaly detection; machine learning; random forest; decision tree; Gaussian naive Bayes
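The emphasis on reducing false positives suggests tuning the detector's decision threshold rather than using the 0.5 default. A hedged sketch: pick the threshold on a random forest's scores that attains a target precision (assumed reachable here) and alert only above it.

```python
# Sketch of the false-positive concern: tune a random-forest detector's
# decision threshold for high precision instead of the 0.5 default.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Placeholder traffic data: 1 = anomalous flow, imbalanced as in real traffic.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)[:, 1]

prec, rec, thr = precision_recall_curve(y_te, proba)
idx = np.where(prec[:-1] >= 0.95)[0][0]   # first threshold hitting 95% precision
print(f"threshold={thr[idx]:.2f}  recall at >=95% precision={rec[idx]:.2f}")
```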
13. Security Monitoring and Management for the Network Services in the Orchestration of SDN-NFV Environment Using Machine Learning Techniques
Authors: Nasser Alshammari, Shumaila Shahzadi, Saad Awadh Alanazi, Shahid Naseem, Muhammad Anwar, Madallah Alruwaili, Muhammad Rizwan Abid, Omar Alruwaili, Ahmed Alsayat, Fahad Ahmad. Computer Systems Science & Engineering, 2024, Issue 2, pp. 363-394.
Software Defined Network (SDN) and Network Function Virtualization (NFV) technology promise several benefits to network operators, including reduced maintenance costs, increased network operational performance, a simplified network lifecycle, and easier policy management. Network vulnerabilities can modify the services provided by NFV MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) lifecycle management of network services or individual Virtualized Network Functions (VNFs). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and responds promptly and adaptively, implementing and handling security functions to enhance the quality of experience for end users. The anomaly detector investigates the identified risks and provides secure network services, enabling virtual network security functions and identifying anomalies in Kubernetes (a cloud-based platform). For training and testing the proposed approach, an intrusion dataset is used that holds multiple malicious activities such as Smurf, Neptune, Teardrop, Pod, Land, and IPsweep, categorized as Probing (Prob), Denial of Service (DoS), User to Root (U2R), and Remote to User (R2L) attacks. The anomaly detector draws on supervised Machine Learning (ML) techniques: Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework was evaluated by deploying the identified ML algorithms in a Jupyter notebook in Kubeflow, simulating Kubernetes for validation purposes. The RF classifier showed better outcomes (99.90% accuracy) than the other classifiers in detecting anomalies/intrusions in the containerized environment.
Keywords: software defined network; network function virtualization; network function virtualization management and orchestration; virtual infrastructure manager; virtual network function; Kubernetes; Kubectl; artificial intelligence; machine learning
14. Slope stability prediction based on a long short-term memory neural network: comparisons with convolutional neural networks, support vector machines and random forest models (Cited: 4)
Authors: Faming Huang, Haowen Xiong, Shixuan Chen, Zhitao Lv, Jinsong Huang, Zhilu Chang, Filippo Catani. International Journal of Coal Science & Technology (EI, CAS, CSCD), 2023, Issue 2, pp. 83-96.
Numerical simulation and slope stability prediction are the focus of slope disaster research. Machine learning models are now commonly used for slope stability prediction, but they suffer from problems such as poor nonlinear performance, local optima, and incomplete extraction of the influencing factors' features, which can affect prediction accuracy. Therefore, a deep learning algorithm, Long Short-Term Memory (LSTM), is innovatively proposed to predict slope stability. Taking Ganzhou City in China as the study area, the landslide inventory and the characteristics of its geotechnical parameters, slope height, and slope angle are analyzed. Based on these characteristics, typical soil slopes are constructed using the Geo-Studio software. Five control factors affecting slope stability, namely slope height, slope angle, internal friction angle, cohesion, and volumetric weight, are selected to form different slopes and construct the model input variables. The limit equilibrium method is then used to calculate the stability coefficients of these typical soil slopes under the different control factors; each stability coefficient and its corresponding control factors constitute one slope sample. In total, 2160 training samples and 450 testing samples are constructed. These sample sets are imported into the LSTM for modelling and compared with support vector machine (SVM), random forest (RF), and convolutional neural network (CNN) models. The results show that the LSTM overcomes the difficulty that commonly used machine learning models have in extracting global features, and it achieves better slope stability prediction performance than the SVM, RF, and CNN models.
Keywords: slope stability prediction; long short-term memory; deep learning; Geo-Studio software; machine learning model
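Feeding five scalar control factors to an LSTM admits several encodings; the sketch below takes one simple reading, treating the factor vector as a length-5 sequence, and regresses the stability coefficient. Layer sizes and the synthetic relation are assumptions.

```python
# Sketch of the setup: five slope control factors in, stability coefficient
# out. Treating the factor vector as a length-5 sequence is one simple
# reading of the abstract; the data-generating relation is invented.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
n = 2160                                   # matches the paper's training count
# Columns: slope height, slope angle, friction angle, cohesion, unit weight.
X = rng.uniform(0, 1, size=(n, 5))
fos = 1.2 + 0.8 * X[:, 2] + 0.6 * X[:, 3] - 0.9 * X[:, 1] + 0.05 * rng.normal(size=n)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5, 1)),   # each factor as one sequence step
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], fos, epochs=10, batch_size=64, verbose=0)
print("predicted stability coefficient:",
      model.predict(X[:1][..., None], verbose=0)[0, 0])
```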
15. Network Intrusion Detection Model Using Fused Machine Learning Technique (Cited: 1)
Authors: Fahad Mazaed Alotaibi. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 2479-2490.
With the progress of advanced technology in the industrial revolution, encompassing the Internet of Things (IoT) and cloud computing, cyberattacks have been increasing rapidly on a large scale. The rapid expansion of IoT and networks in many forms generates massive volumes of data, which are vulnerable to security risks. As a result, cyberattacks have become a prevalent danger to society, including its infrastructures, economy, and citizens' privacy, and pose a national security risk worldwide. Therefore, cyber security has become an increasingly important issue across all levels and sectors. Continuous progress is being made in developing more sophisticated and efficient intrusion detection and defensive methods. As the complexity of the cyber-universe increases, advanced machine learning methods are the most appropriate solutions for predicting cyber threats. In this study, a fused machine learning-based intelligent model is proposed to detect intrusion at an early stage and thus secure networks from harmful attacks. Simulation results confirm the effectiveness of the proposed intrusion detection model, with an accuracy of 0.909 and a miss rate of 0.091.
Keywords: cyberattack; machine learning; prediction; solution; intrusion detection
16. Genetic algorithm-optimized backpropagation neural network establishes a diagnostic prediction model for diabetic nephropathy: Combined machine learning and experimental validation in mice (Cited: 1)
Authors: Wei Liang, Zongwei Zhang, Keju Yang, Hongtu Hu, Qiang Luo, Ankang Yang, Li Chang, Yuanyuan Zeng. BIOCELL (SCIE), 2023, Issue 6, pp. 1253-1263.
Background: Diabetic nephropathy (DN) is the most common complication of type 2 diabetes mellitus and the main cause of end-stage renal disease worldwide. Diagnostic biomarkers may allow early diagnosis and treatment of DN, reducing its prevalence and delaying its development. Kidney biopsy is the gold standard for diagnosing DN; however, its invasive character is its primary limitation. Machine learning provides a non-invasive and specific criterion for diagnosing DN, although traditional machine learning algorithms need to be improved to enhance diagnostic performance. Methods: We applied high-throughput RNA sequencing to obtain the genes related to DN tubular tissues and normal tubular tissues in mice. Machine learning algorithms (random forest, LASSO logistic regression, and principal component analysis) were then used to identify key genes (CES1G, CYP4A14, NDUFA4, ABCC4, ACE). A genetic algorithm-optimized backpropagation neural network (GA-BPNN) was then used to improve the DN diagnostic model. Results: The AUC value of the GA-BPNN model was 0.83 in the training dataset and 0.81 in the validation dataset, while the AUC values of the SVM model in the training dataset and external validation dataset were 0.756 and 0.650, respectively. The GA-BPNN thus outperformed the traditional SVM model. This diagnostic model may support personalized diagnosis and treatment of patients with DN. Immunohistochemical staining further confirmed that the tissue and cell expression of NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like 2 (NDUFA4L2) in tubular tissue of DN mice was decreased. Conclusion: The GA-BPNN model has better accuracy than the traditional SVM model and may provide an effective tool for diagnosing DN.
Keywords: diabetic nephropathy; renal tubule; machine learning; diagnostic model; genetic algorithm
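A compact version of the GA-BPNN idea: a small genetic algorithm evolves backpropagation-network hyper-parameters and scores each candidate by cross-validated AUC. The gene ranges, selection scheme, and mutation rate below are illustrative, not the paper's configuration.

```python
# Hedged GA-BPNN sketch: a tiny genetic algorithm searches backprop-network
# hyper-parameters (hidden size, L2 alpha, learning rate).
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Five features stand in for the five key gene expressions.
X, y = make_classification(n_samples=400, n_features=5, random_state=0)

def fitness(g):
    net = MLPClassifier(hidden_layer_sizes=(g["h"],), alpha=g["a"],
                        learning_rate_init=g["lr"], max_iter=800, random_state=0)
    return cross_val_score(net, X, y, cv=3, scoring="roc_auc").mean()

def rand_g():
    return {"h": random.randint(4, 64), "a": 10 ** random.uniform(-5, -1),
            "lr": 10 ** random.uniform(-4, -1)}

random.seed(0)
pop = [rand_g() for _ in range(8)]
for gen in range(4):
    parents = sorted(pop, key=fitness, reverse=True)[:4]   # keep the fittest
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = random.sample(parents, 2)
        child = {k: random.choice([a[k], b[k]]) for k in a}  # uniform crossover
        if random.random() < 0.3:                            # mutate one gene
            k = random.choice(list(child))
            child[k] = rand_g()[k]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best hyper-parameters:", best, " AUC:", round(fitness(best), 3))
```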
17. Tunnelling performance prediction of cantilever boring machine in sedimentary hard-rock tunnel using deep belief network (Cited: 2)
Authors: Song Zhan-ping, Cheng Yun, Zhang Ze-kun, Yang Teng-tian. Journal of Mountain Science (SCIE, CSCD), 2023, Issue 7, pp. 2029-2040.
Evaluating the adaptability of a cantilever boring machine (CBM) through in-depth analysis of tunnel excavation data and rock mass parameters is a prerequisite for mechanical design and efficient excavation in underground space engineering. This paper presents a case study of a tunnelling performance prediction method for CBMs in sedimentary hard-rock tunnels of Karst landform type, using tunnelling data and surrounding rock parameters. The uniaxial compressive strength (UCS), rock integrity factor (Kv), basic quality index ([BQ]), rock quality index (RQD), Brazilian tensile strength (BTS), and brittleness index (BI) were introduced to construct a performance prediction database based on the hard-rock tunnels of Guiyang Metro Line 1 and Line 3, from which a performance prediction model for the cantilever boring machine was established. A deep belief network (DBN) was then introduced into the performance prediction model, and the model's reliability was verified against engineering data. The study shows that the degree of influence of the surrounding rock parameters on the tunnelling performance of the cantilever boring machine is UCS > [BQ] > BTS > RQD > Kv > BI. The performance prediction model shows that the instantaneous cutting rate (ICR) correlates well with the surrounding rock parameters, and model accuracy depends on the reliability of the construction data. Prediction for the limestone and dolomite sections of Line 3 based on the DBN performance prediction model shows that the measured and predicted ICR are consistent, confirming that the model is reliable. The research results provide a theoretical reference for the applicability analysis and mechanical selection of cantilever boring machines for hard-rock tunnels.
Keywords: urban metro tunnel; cantilever boring machine; hard rock tunnel; performance prediction model; linear regression; deep belief network
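A DBN is classically built by greedy layer-wise RBM pretraining; scikit-learn's BernoulliRBM allows a rough sketch of that pattern with a linear readout for ICR. This is a loose approximation under stated assumptions; a full DBN with supervised fine-tuning is beyond a sketch.

```python
# Hedged DBN-style sketch: greedy layer-wise RBM feature learning on scaled
# rock-mass parameters, then a linear readout for instantaneous cutting rate.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(3)
# Columns stand in for UCS, Kv, [BQ], RQD, BTS, BI; the relation is invented.
X = rng.uniform(0, 1, size=(800, 6))
icr = 3.0 - 2.0 * X[:, 0] - 0.8 * X[:, 2] + 0.1 * rng.normal(size=800)

dbn = make_pipeline(
    MinMaxScaler(),                                   # RBMs expect [0, 1] inputs
    BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=30, random_state=3),
    BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=30, random_state=3),
    Ridge(alpha=1.0),                                 # linear readout on DBN features
)
dbn.fit(X, icr)                                       # pipeline trains layers greedily
print("predicted ICR:", dbn.predict(X[:3]).round(2))
```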
18. Adaptive Load Balancing for Parameter Servers in Distributed Machine Learning over Heterogeneous Networks (Cited: 1)
Authors: Cai Weibo, Yang Shulin, Sun Gang, Zhang Qiming, Yu Hongfang. ZTE Communications, 2023, Issue 1, pp. 72-80.
In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load distribution across PSs leads to a significant slowdown of model synchronization in heterogeneous networks due to low bandwidth utilization. To address this problem, a network-aware adaptive PS load distribution scheme is proposed, which accelerates model synchronization by proactively adjusting the communication load on PSs according to network states. We evaluate the proposed scheme on MXNet, a real-world distributed training platform, and the results show that our scheme achieves up to a 2.68-times speed-up of model training in a dynamic and heterogeneous network environment.
Keywords: distributed machine learning; network awareness; parameter server; load distribution; heterogeneous network
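The core mechanism, proactively re-sizing each parameter server's share of the model in proportion to its observed bandwidth, can be captured in a few lines. The function name and bandwidth figures below are hypothetical.

```python
# Sketch of the core idea: repartition model parameters across parameter
# servers in proportion to each server's measured bandwidth, so slow links
# carry less synchronization traffic. Values are invented examples.
from typing import Dict

def partition_params(n_params: int, bandwidth_mbps: Dict[str, float]) -> Dict[str, int]:
    """Assign each PS a parameter count proportional to its bandwidth."""
    total_bw = sum(bandwidth_mbps.values())
    shares = {ps: int(n_params * bw / total_bw) for ps, bw in bandwidth_mbps.items()}
    # Hand any rounding remainder to the fastest server.
    fastest = max(bandwidth_mbps, key=bandwidth_mbps.get)
    shares[fastest] += n_params - sum(shares.values())
    return shares

# Re-run whenever the monitored network state changes (the "adaptive" part).
print(partition_params(10_000_000, {"ps0": 100.0, "ps1": 40.0, "ps2": 10.0}))
# -> {'ps0': 6666668, 'ps1': 2666666, 'ps2': 666666}
```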
19. Significant risk factors for intensive care unit-acquired weakness: A processing strategy based on repeated machine learning (Cited: 5)
Authors: Ling Wang, Deng-Yan Long. World Journal of Clinical Cases (SCIE), 2024, Issue 7, pp. 1235-1242.
BACKGROUND: Intensive care unit-acquired weakness (ICU-AW) is a common complication that significantly impacts the patient's recovery process and can even lead to adverse outcomes. Currently, there is a lack of effective preventive measures. AIM: To identify significant risk factors for ICU-AW through iterative machine learning techniques and offer recommendations for its prevention and treatment. METHODS: Patients were categorized into ICU-AW and non-ICU-AW groups on the 14th day post-ICU admission. Relevant data from the initial 14 days of ICU stay, such as age, comorbidities, sedative dosage, vasopressor dosage, duration of mechanical ventilation, length of ICU stay, and rehabilitation therapy, were gathered, and the relationships between these variables and ICU-AW were examined. Utilizing iterative machine learning techniques, a multilayer perceptron neural network model was developed, and its predictive performance for ICU-AW was assessed using the receiver operating characteristic curve. RESULTS: In the ICU-AW group, age, duration of mechanical ventilation, lorazepam dosage, adrenaline dosage, and length of ICU stay were significantly higher than in the non-ICU-AW group. Additionally, the ratios of sepsis, multiple organ dysfunction syndrome, hypoalbuminemia, acute heart failure, respiratory failure, acute kidney injury, anemia, stress-related gastrointestinal bleeding, shock, hypertension, coronary artery disease, malignant tumors, and rehabilitation therapy were significantly higher in the ICU-AW group. The most influential factors contributing to ICU-AW were identified as the length of ICU stay (100.0%) and the duration of mechanical ventilation (54.9%). The neural network model predicted ICU-AW with an area under the curve of 0.941, sensitivity of 92.2%, and specificity of 82.7%. CONCLUSION: The main factors influencing ICU-AW are the length of ICU stay and the duration of mechanical ventilation. A primary preventive strategy, when feasible, is to minimize both.
Keywords: intensive care unit-acquired weakness; risk factors; machine learning; prevention strategies
20. Light gradient boosting machine with optimized hyperparameters for identification of malicious access in IoT network
Authors: Debasmita Mishra, Bighnaraj Naik, Janmenjoy Nayak, Alireza Souri, Pandit Byomakesha Dash, S. Vimal. Digital Communications and Networks (SCIE, CSCD), 2023, Issue 1, pp. 125-137.
In this paper, an advanced and optimized Light Gradient Boosting Machine (LGBM) technique is proposed to identify intrusive activities in an Internet of Things (IoT) network. The major contributions are as follows: i) an optimized LGBM model is developed for the identification of malicious IoT activities in an IoT network; ii) an efficient evolutionary optimization approach, a Genetic Algorithm (GA) with k-way tournament selection and uniform crossover, is adopted for efficient exploration of the hyper-parameter search space to find the optimal set of LGBM hyper-parameters; iii) finally, the performance of the proposed model is compared against state-of-the-art ensemble learning and machine learning-based models to assess overall generalized performance and efficiency. Simulation outcomes reveal that the proposed approach is superior to the other methods considered and proves to be a robust approach to intrusion detection in an IoT environment.
Keywords: IoT security; ensemble method; light gradient boosting machine; machine learning; intrusion detection
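The abstract names the GA operators precisely (k-way tournament selection, uniform crossover), so a faithful toy version can be sketched over LightGBM hyper-parameters. Search ranges, population size, and mutation rate are assumptions.

```python
# Hedged sketch of the paper's search: a GA with k-way tournament selection
# and uniform crossover tunes LightGBM hyper-parameters.
import random
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Placeholder IoT traffic data: 1 = malicious access, imbalanced classes.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

SPACE = {"num_leaves": (8, 128), "learning_rate": (0.01, 0.3), "n_estimators": (50, 400)}

def rand_ind():
    return {"num_leaves": random.randint(*SPACE["num_leaves"]),
            "learning_rate": random.uniform(*SPACE["learning_rate"]),
            "n_estimators": random.randint(*SPACE["n_estimators"])}

def fitness(ind):
    return cross_val_score(LGBMClassifier(**ind, random_state=0),
                           X, y, cv=3, scoring="f1").mean()

def tournament(scored, k=3):                       # k-way tournament selection
    return max(random.sample(scored, k), key=lambda s: s[1])[0]

random.seed(0)
pop = [rand_ind() for _ in range(8)]
for gen in range(4):
    scored = [(ind, fitness(ind)) for ind in pop]
    nxt = []
    while len(nxt) < len(pop):
        a, b = tournament(scored), tournament(scored)
        child = {k_: random.choice([a[k_], b[k_]]) for k_ in a}  # uniform crossover
        if random.random() < 0.2:                  # mutate one gene
            g = random.choice(list(child))
            child[g] = rand_ind()[g]
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print("best LightGBM params:", best)
```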