Journal Articles
44 articles found
1. Stochastic Gradient Compression for Federated Learning over Wireless Network
Authors: Lin Xiaohan, Liu Yuan, Chen Fangjiong, Huang Yang, Ge Xiaohu. China Communications, SCIE CSCD, 2024(4): 230-247 (18 pages)
Abstract: As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of edge devices. We first derive a closed form of the communication compression in terms of the sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem with the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
Keywords: federated learning; gradient compression; quantization; resource allocation; stochastic gradient descent (SGD)
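To make the two compression stages concrete, here is a minimal sketch, assuming top-k magnitude sparsification followed by uniform scalar quantization; the paper's exact scheme and factor choices may differ:

```python
def sparsify_topk(grad, k):
    """Keep the k largest-magnitude entries of the gradient; zero out the rest."""
    keep = set(sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k])
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

def quantize_uniform(grad, bits):
    """Uniform scalar quantization to 2**bits levels on [-m, m]."""
    m = max(abs(g) for g in grad) or 1.0
    levels = 2 ** bits - 1
    return [round((g + m) / (2 * m) * levels) / levels * (2 * m) - m for g in grad]

g = [0.9, -0.05, 0.4, 0.01, -0.6]
sparse = sparsify_topk(g, 3)           # [0.9, 0.0, 0.4, 0.0, -0.6]
compact = quantize_uniform(sparse, 4)  # each entry snapped to a 4-bit grid
```

In a real federated system only the k surviving index-value pairs, at the chosen bit width, would be uploaded; that is where the communication savings come from.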
2. L1-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection
Authors: Chuandong Qin, Yu Cao, Liqun Meng. Computers, Materials & Continua, SCIE EI, 2024(5): 1975-1994 (20 pages)
Abstract: Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection, and gradient descent methods are the mainstream algorithms for solving them. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-smooth support vector machine (SVM) classifier for brain tumor detection. First, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the nondifferentiability at zero encountered by the traditional hinge loss during gradient descent optimization. Second, L1 regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum and distributed adaptive PGD with momentum (DPGD) are proposed and applied to the L1-smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters, enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of the computer's multi-core resources. Due to the sparsity induced by L1 regularization on the parameters, it exhibits significantly accelerated convergence; from the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve cutting-edge accuracy and efficiency in brain tumor detection. Among pre-trained models, both PGD and DPGD outperform other models, boasting an accuracy of 95.21%.
Keywords: support vector machine; proximal stochastic gradient descent; brain tumor detection; distributed computing
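As a sketch of two ingredients named in the abstract, the quadratically smoothed hinge loss and an L1 proximal (soft-thresholding) step with momentum can be written as follows; the adaptive step sizes and the Spark-based distribution of the paper are omitted:

```python
def smooth_hinge(z):
    """Quadratically smoothed hinge loss: differentiable everywhere,
    unlike max(0, 1 - z), which has a kink at z = 1."""
    if z >= 1.0:
        return 0.0
    if z <= 0.0:
        return 0.5 - z
    return 0.5 * (1.0 - z) ** 2

def soft_threshold(w, t):
    """Proximal operator of t * |w|: shrink w toward zero by t."""
    return w - t if w > t else (w + t if w < -t else 0.0)

def prox_sgd_momentum_step(w, v, grad, lr=0.1, beta=0.9, lam=0.05):
    """One proximal SGD step with momentum on an L1-regularized objective."""
    v = [beta * vi + gi for vi, gi in zip(v, grad)]    # momentum buffer
    w = [wi - lr * vi for wi, vi in zip(w, v)]         # descent step
    w = [soft_threshold(wi, lr * lam) for wi in w]     # L1 proximal step
    return w, v
```

The proximal step is what produces exact zeros in the weight vector, which is the sparsity the abstract credits for the accelerated convergence.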
3. Feasibility of stochastic gradient boosting approach for predicting rockburst damage in burst-prone mines (cited 4 times)
Authors: 周健, 史秀志, 黄仁东, 邱贤阳, 陈冲. Transactions of Nonferrous Metals Society of China, SCIE EI CAS CSCD, 2016(7): 1938-1945 (8 pages)
Abstract: A database of 254 rockburst events was examined for rockburst damage classification using stochastic gradient boosting (SGB) methods. Five potentially relevant indicators were analyzed: the stress condition factor, the ground support system capacity, the excavation span, the geological structure, and the peak particle velocity of rockburst sites. The performance of the model was evaluated using a 10-fold cross-validation (CV) procedure with 80% of the original data during modeling, and an external testing set (20%) was employed to validate the prediction performance of the SGB model. Two accuracy measures for multi-class problems were employed: the classification accuracy rate and Cohen's Kappa. The accuracy analysis together with Kappa for the rockburst damage dataset reveals that the SGB model for the prediction of rockburst damage is acceptable.
Keywords: burst-prone mine; rockburst damage; stochastic gradient boosting method
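The "stochastic" in stochastic gradient boosting refers to fitting each base learner on a random subsample of the data. A toy regression-stump sketch of that loop is below (illustrative only; the rockburst model is a multi-class classifier over five indicators):

```python
import random

def fit_stump(x, r):
    """Best single-threshold split minimizing squared error on residuals r."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        err = sum((ri - (lm if xi <= t else rm)) ** 2 for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, (t, lm, rm))
    return best[1]

def sgb_fit(x, y, rounds=100, lr=0.3, subsample=0.5, seed=1):
    """Each round fits a stump to the residuals of a random row subsample;
    the subsampling is what makes gradient boosting 'stochastic'."""
    rng = random.Random(seed)
    f0 = sum(y) / len(y)
    pred = [f0] * len(x)
    stumps = []
    for _ in range(rounds):
        idx = rng.sample(range(len(x)), max(2, int(subsample * len(x))))
        resid = [y[i] - pred[i] for i in idx]   # negative gradient of L2 loss
        t, lm, rm = fit_stump([x[i] for i in idx], resid)
        stumps.append((t, lm, rm))
        pred = [p + lr * (lm if xi <= t else rm) for p, xi in zip(pred, x)]
    return f0, lr, stumps

def sgb_predict(model, xi):
    f0, lr, stumps = model
    return f0 + sum(lr * (lm if xi <= t else rm) for t, lm, rm in stumps)
```

Sampling roughly half the rows per round both speeds up each fit and decorrelates the base learners, which is the regularizing effect SGB is known for.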
4. Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning (cited 5 times)
Authors: Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2021(2): 402-411 (10 pages)
Abstract: A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into the training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.
Keywords: big data; industrial application; industrial data; latent factor analysis; machine learning; parallel algorithm; recommender system (RS); stochastic gradient descent (SGD)
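A single momentum-accelerated SGD update for one observed rating in a latent factor model can be sketched as below; the paper's MPSGD additionally parallelizes these updates via its data-splitting strategy, which is omitted here:

```python
def mf_sgd_momentum_step(p, q, r, vp, vq, lr=0.01, beta=0.9, lam=0.02):
    """One momentum SGD update of user factors p and item factors q
    for a single observed rating r, with L2 regularization lam."""
    err = r - sum(pi * qi for pi, qi in zip(p, q))        # prediction error
    gp = [-err * qi + lam * pi for pi, qi in zip(p, q)]   # gradient w.r.t. p
    gq = [-err * pi + lam * qi for pi, qi in zip(p, q)]   # gradient w.r.t. q
    vp = [beta * v + g for v, g in zip(vp, gp)]           # momentum buffers
    vq = [beta * v + g for v, g in zip(vq, gq)]
    p = [pi - lr * v for pi, v in zip(p, vp)]
    q = [qi - lr * v for qi, v in zip(q, vq)]
    return p, q, vp, vq

# repeatedly revisiting one rating drives the prediction toward it
p, q, vp, vq = [0.1, 0.1], [0.1, 0.1], [0.0, 0.0], [0.0, 0.0]
for _ in range(400):
    p, q, vp, vq = mf_sgd_momentum_step(p, q, 1.0, vp, vq)
```

Because each rating touches only one user row and one item column, disjoint blocks of the rating matrix can run these updates concurrently, which is what a data-splitting parallel scheme exploits.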
5. Predicted Oil Recovery Scaling-Law Using Stochastic Gradient Boosting Regression Model
Authors: Mohamed F. El-Amin, Abdulhamit Subasi, Mahmoud M. Selim, Awad Mousa. Computers, Materials & Continua, SCIE EI, 2021(8): 2349-2362 (14 pages)
Abstract: In the process of oil recovery, experiments are usually carried out on core samples to evaluate the recovery of oil, and the numerical data are fitted into a non-dimensional equation called a scaling-law; this is essential for determining the behavior of actual reservoirs. The global non-dimensional time-scale is a parameter for predicting realistic oil-field behavior from laboratory data. This non-dimensional universal time parameter depends on a set of primary parameters that inherit the properties of the reservoir fluids and rocks, and on the injection velocity, which governs the dynamics of the process. One practical machine learning (ML) technique for regression/classification problems is gradient boosting (GB) regression. GB produces a prediction model as an ensemble of weak prediction models, built at each iteration by fitting a least-squares base-learner to the current pseudo-residuals. Using a randomization process increases the execution speed and accuracy of GB. Hence, in this study, we developed a stochastic gradient boosting (SGB) regression model to forecast oil recovery. Different non-dimensional time-scales were used to generate data for the machine learning techniques. The SGB method was found to be the best machine learning technique for predicting the non-dimensional time-scale, which depends on oil/rock properties.
Keywords: machine learning; stochastic gradient boosting; linear regression; time-scale; oil recovery
6. Auxiliary Model Based Multi-innovation Stochastic Gradient Identification Methods for Hammerstein Output-Error System
Authors: 冯启亮, 贾立, 李峰. Journal of Donghua University (English Edition), EI CAS, 2017(1): 53-59 (7 pages)
Abstract: A special input signals identification method based on the auxiliary model based multi-innovation stochastic gradient algorithm for the Hammerstein output-error system was proposed. The special input signals were used to realize the identification and separation of the Hammerstein model. As a result, the identification of the dynamic linear part can be separated from the static nonlinear elements without any redundant adjustable parameters. The auxiliary model based multi-innovation stochastic gradient algorithm was applied to identify the serial link parameters of the Hammerstein model. The algorithm can avoid the influence of noise and improve the identification accuracy by changing the innovation length. The simulation results show the efficiency of the proposed method.
Keywords: Hammerstein output-error system; special input signals; auxiliary model based multi-innovation stochastic gradient algorithm; innovation length
7. Stochastic Gradient Boosting Model for Twitter Spam Detection
Authors: K. Kiruthika Devi, G.A. Sathish Kumar. Computer Systems Science & Engineering, SCIE EI, 2022(5): 849-859 (11 pages)
Abstract: In today's world of connectivity there is a larger amount of data than we could imagine. The number of network users is increasing day by day, and a large number of social networks keep users connected all the time. These social networks give users complete independence to post data of political, commercial, or entertainment value. Some data may be sensitive and, as a result, have a greater impact on society. The trustworthiness of data is important when it comes to public social networking sites like Facebook and Twitter. Due to the large user base and its openness, there is a huge possibility of spam messages spreading in these networks. Spam detection is a technique to identify and mark data as false. Many machine learning approaches have been proposed to detect spam in social networks, and the efficiency of any spam detection algorithm is determined by its cost factor and accuracy. Aiming to improve the detection of spam in social networks, this study proposes using statistics-based features modelled through the supervised boosting approach called stochastic gradient boosting to evaluate Twitter datasets in the English language. The performance of the proposed model is evaluated using simulation results.
Keywords: Twitter; spam; stochastic gradient boosting
8. A stochastic gradient-based two-step sparse identification algorithm for multivariate ARX systems
Authors: Yanxin Fu, Wenxiao Zhao. Control Theory and Technology, EI CSCD, 2024(2): 213-221 (9 pages)
Abstract: We consider the sparse identification of multivariate ARX systems, i.e., recovering the zero elements of the unknown parameter matrix. We propose a two-step algorithm: in the first step the stochastic gradient (SG) algorithm is applied to obtain initial estimates of the unknown parameter matrix, and in the second step an optimization criterion is introduced for the sparse identification of multivariate ARX systems. Under mild conditions, we prove that by minimizing the criterion function, the zero elements of the unknown parameter matrix can be recovered with a finite number of observations. The performance of the algorithm is demonstrated through a simulation example.
Keywords: ARX system; stochastic gradient algorithm; sparse identification; support recovery; parameter estimation; strong consistency
9. Online distributed optimization with stochastic gradients: high probability bound of regrets
Authors: Yuchen Yang, Kaihong Lu, Long Wang. Control Theory and Technology, EI CSCD, 2024(3): 419-430 (12 pages)
Abstract: In this paper, the problem of online distributed optimization subject to a convex set is studied via a network of agents. Each agent only has access to a noisy gradient of its own objective function and can communicate with its neighbors via a network. To handle this problem, an online distributed stochastic mirror descent algorithm is proposed. Existing works on online distributed algorithms involving stochastic gradients only provide expectation bounds on the regrets. In contrast, we study the high-probability bound of the regrets, i.e., the sublinear bound of the regret is characterized by the natural logarithm of the inverse of the failure probability. Under mild assumptions on graph connectivity, we prove that the dynamic regret grows sublinearly with high probability if the deviation in the minimizer sequence grows sublinearly with the square root of the time horizon. Finally, a simulation is provided to demonstrate the effectiveness of our theoretical results.
Keywords: distributed optimization; online optimization; stochastic gradient; high probability
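For intuition about stochastic mirror descent on a constraint set, the classic special case with the entropic mirror map on the probability simplex reduces to a multiplicative update; this is a generic single-agent sketch, not the paper's distributed algorithm:

```python
import math

def smd_entropy_step(w, noisy_grad, lr):
    """One stochastic mirror descent step with the entropic mirror map:
    multiplicative weight update followed by renormalization, which keeps
    the iterate on the probability simplex without an explicit projection."""
    u = [wi * math.exp(-lr * g) for wi, g in zip(w, noisy_grad)]
    s = sum(u)
    return [ui / s for ui in u]

w = [0.25, 0.25, 0.25, 0.25]
for _ in range(100):
    w = smd_entropy_step(w, [1.0, 0.5, 0.2, 0.9], lr=0.1)
# mass concentrates on the coordinate with the smallest gradient (index 2)
```

The mirror map is what adapts the geometry of the step to the constraint set; with noisy gradients, the analysis in papers like this one then bounds how often such updates can go wrong.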
10. A Stochastic Gradient Descent Method for Computational Design of Random Rough Surfaces in Solar Cells
Authors: Qiang Li, Gang Bao, Yanzhao Cao, Junshan Lin. Communications in Computational Physics, SCIE, 2023(10): 1361-1390 (30 pages)
Abstract: In this work, we develop a stochastic gradient descent method for the computational optimal design of random rough surfaces in thin-film solar cells. We formulate the design problems as random PDE-constrained optimization problems and seek the optimal statistical parameters for the random surfaces. Optimization at a fixed frequency as well as at multiple frequencies and multiple incident angles is investigated. To evaluate the gradient of the objective function, we derive the shape derivatives for the interfaces and apply the adjoint state method to perform the computation. The stochastic gradient descent method evaluates the gradient of the objective function at only a few samples per iteration, which reduces the computational cost significantly. Various numerical experiments are conducted to illustrate the efficiency of the method and the significant increase of the absorptance for the optimal random structures. We also examine the convergence of the stochastic gradient descent algorithm theoretically and prove that the numerical method is convergent under certain assumptions on the random interfaces.
Keywords: optimal design; random rough surface; solar cell; Helmholtz equation; stochastic gradient descent method
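The core trick, replacing the full expectation in the objective's gradient with an average over a few random samples per iteration, can be sketched on a toy stochastic objective (the PDE-constrained shape derivatives of the paper are far heavier to evaluate, which is exactly why sampling pays off):

```python
import random

def sgd_expected_objective(grad_sample, theta, lr=0.05, batch=4, iters=500, seed=0):
    """SGD on J(theta) = E_omega[f(theta, omega)]: each iteration estimates the
    gradient from only a few random draws of omega, not the full expectation."""
    rng = random.Random(seed)
    for _ in range(iters):
        g = sum(grad_sample(theta, rng.gauss(0.0, 1.0)) for _ in range(batch)) / batch
        theta -= lr * g
    return theta

# toy objective f(theta, omega) = (theta - (1 + omega))^2 with omega ~ N(0, 1),
# so J(theta) is minimized at theta = 1
theta_star = sgd_expected_objective(lambda th, om: 2.0 * (th - (1.0 + om)), theta=5.0)
```

In the paper's setting each sample omega would be one realization of the random surface, and grad_sample would require a PDE solve plus its adjoint.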
11. Performance analysis of stochastic gradient algorithms under weak conditions (cited 14 times)
Authors: DING Feng, YANG HuiZhong, LIU Fei. Science in China (Series F), 2008(9): 1269-1280 (12 pages)
Abstract: By using stochastic martingale theory, the convergence properties of stochastic gradient (SG) identification algorithms are studied under weak conditions. The analysis indicates that the parameter estimates given by the SG algorithms consistently converge to the true parameters as long as the information vector is persistently exciting (i.e., the data product moment matrix has a bounded condition number) and the process noises are zero-mean and uncorrelated. These results remove the strict assumptions made in existing references that the noise variances and high-order moments exist, that the processes are stationary and ergodic, and that the strong persistent excitation condition holds. This contribution greatly relaxes the convergence conditions of stochastic gradient algorithms. Simulation results with bounded and unbounded noise variances confirm the proposed convergence conclusions.
Keywords: recursive identification; parameter estimation; least squares; stochastic gradient; multivariable systems; convergence properties; martingale convergence theorem
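The SG recursion whose convergence such papers analyze is short; a minimal single-output sketch with the standard step size 1/r_t, where r_t accumulates the squared regressor norms:

```python
def sg_identify(data, n):
    """Stochastic gradient identification for y_t = phi_t' * theta + v_t.
    data is a sequence of (phi_t, y_t) pairs; the step size 1/r_t uses the
    accumulated squared norm of the regressors, as in the classic SG scheme."""
    theta = [0.0] * n
    r = 1.0
    for phi, y in data:
        e = y - sum(p * th for p, th in zip(phi, theta))   # innovation
        r += sum(p * p for p in phi)
        theta = [th + p * e / r for th, p in zip(theta, phi)]
    return theta

# noiseless, persistently exciting data with true theta = [2.0, -1.0]
data = [(([1.0, 0.0] if t % 2 == 0 else [0.0, 1.0]),
         (2.0 if t % 2 == 0 else -1.0)) for t in range(5000)]
theta_hat = sg_identify(data, 2)
```

The alternating regressors here keep the data product moment matrix well conditioned, which is the persistent excitation condition the paper's result relies on; convergence is slow (the step size decays like 1/t) but requires no noise variance assumptions.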
12. Convergence of Stochastic Gradient Descent in Deep Neural Network (cited 4 times)
Authors: Bai-cun ZHOU, Cong-ying HAN, Tian-de GUO. Acta Mathematicae Applicatae Sinica, SCIE CSCD, 2021(1): 126-136 (11 pages)
Abstract: Stochastic gradient descent (SGD) is one of the most common optimization algorithms used in pattern recognition and machine learning. This algorithm and its variants are the preferred algorithms for optimizing the parameters of deep neural networks, owing to their low storage space requirements and fast computation speed. Previous studies on the convergence of these algorithms were based on some traditional assumptions from optimization problems. However, the deep neural network has its own unique properties, and some of these assumptions are inappropriate for the actual optimization process of this kind of model. In this paper, we modify the assumptions to make them more consistent with the actual optimization process of deep neural networks. Based on the new assumptions, we study the convergence and convergence rate of SGD and two of its common variant algorithms. In addition, we carry out numerical experiments with LeNet-5, a common network framework, on the dataset MNIST to verify the rationality of our assumptions.
Keywords: stochastic gradient descent; deep neural network; convergence
13. MODELING OF FREE JUMPS DOWNSTREAM SYMMETRIC AND ASYMMETRIC EXPANSIONS: THEORETICAL ANALYSIS AND METHOD OF STOCHASTIC GRADIENT BOOSTING (cited 2 times)
Authors: MOHAMED A. Nassar. Journal of Hydrodynamics, SCIE EI CSCD, 2010(1): 110-120 (11 pages)
Abstract: The general computational approach of Stochastic Gradient Boosting (SGB) is seen as one of the most powerful methods in predictive data mining. Its applications include regression analysis and classification problems with/without continuous categorical predictors. The present theoretical and experimental study aims to model the free hydraulic jump created through rectangular channels Downstream (DS) of symmetric and asymmetric expansions using SGB. A theoretical model for predicting the depth ratio of jumps is developed using the governing flow equations. At the same time, statistical models using linear regression are also developed. Three different parameters of the hydraulic jump are investigated experimentally using modified angled-guide walls. The results from the modified SGB model indicate a significant improvement over the original models. The present study shows the possibility of applying the modified SGB method in engineering designs and other practical applications.
Keywords: stochastic gradient boosting (SGB); free jump; symmetric; asymmetric; theoretical; regression; experiment
14. Stochastic gradient algorithm for a dual-rate Box-Jenkins model based on auxiliary model and FIR model (cited 2 times)
Authors: Jing CHEN, Rui-feng DING. Journal of Zhejiang University-Science C (Computers and Electronics), SCIE EI, 2014(2): 147-152 (6 pages)
Abstract: Based on the work in Ding and Ding (2008), we develop a modified stochastic gradient (SG) parameter estimation algorithm for a dual-rate Box-Jenkins model by using an auxiliary model. We simplify the complex dual-rate Box-Jenkins model to two finite impulse response (FIR) models, present an auxiliary model to estimate the missing outputs and the unknown noise variables, and compute all the unknown parameters of the system with colored noises. Simulation results indicate that the proposed method is effective.
Keywords: parameter estimation; auxiliary model; dual-rate system; stochastic gradient; Box-Jenkins model; FIR model
15. Differentially private SGD with random features
Authors: WANG Yi-guang, GUO Zheng-chu. Applied Mathematics (A Journal of Chinese Universities), SCIE CSCD, 2024(1): 1-23 (23 pages)
Abstract: In the realm of large-scale machine learning, it is crucial to explore methods for reducing computational complexity and memory demands while maintaining generalization performance. Additionally, since the collected data may contain sensitive information, it is also of great significance to study privacy-preserving machine learning algorithms. This paper focuses on the performance of the differentially private stochastic gradient descent (SGD) algorithm based on random features. To begin, the algorithm maps the original data into a low-dimensional space, thereby avoiding the traditional kernel method's large-scale data storage requirement. Subsequently, the algorithm iteratively optimizes parameters using the stochastic gradient descent approach. Lastly, the output perturbation mechanism is employed to introduce random noise, ensuring algorithmic privacy. We prove that the proposed algorithm satisfies differential privacy while achieving fast convergence rates under some mild conditions.
Keywords: learning theory; differential privacy; stochastic gradient descent; random features; reproducing kernel Hilbert spaces
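A minimal sketch of the three ingredients, random Fourier features, SGD, and output perturbation, is below. The noise scale sigma is illustrative and not calibrated to a formal (epsilon, delta) guarantee, and the feature dimension and learning rate are assumptions, not the paper's choices:

```python
import math, random

rng = random.Random(0)

def rff_map(x, omegas, phases):
    """Random Fourier features approximating a Gaussian kernel."""
    d = len(omegas)
    return [math.sqrt(2.0 / d) * math.cos(sum(w * xi for w, xi in zip(om, x)) + b)
            for om, b in zip(omegas, phases)]

D = 20                                       # feature dimension (assumption)
omegas = [[rng.gauss(0.0, 1.0)] for _ in range(D)]
phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

# plain SGD for least squares in the random feature space
w = [0.0] * D
for _ in range(2000):
    x = [rng.uniform(-1.0, 1.0)]
    y = x[0] ** 2                            # toy regression target
    z = rff_map(x, omegas, phases)
    p = sum(wi * zi for wi, zi in zip(w, z))
    w = [wi + 0.1 * (y - p) * zi for wi, zi in zip(w, z)]

# output perturbation: add noise only to the parameters that are released
sigma = 0.01                                 # illustrative, not DP-calibrated
w_private = [wi + rng.gauss(0.0, sigma) for wi in w]
```

Mapping to D features up front means training touches only D-dimensional vectors rather than an n-by-n kernel matrix, which is the storage saving the abstract refers to; the noise is added once, to the output, rather than per gradient step.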
16. FL-EASGD: Federated Learning Privacy Security Method Based on Homomorphic Encryption
Authors: Hao Sun, Xiubo Chen, Kaiguo Yuan. Computers, Materials & Continua, SCIE EI, 2024(5): 2361-2373 (13 pages)
Abstract: Federated learning ensures data privacy and security by sharing models among multiple computing nodes instead of plaintext data. However, there is still a potential risk of privacy leakage; for example, attackers can obtain the original data through model inference attacks. Therefore, safeguarding the privacy of model parameters becomes crucial. One proposed solution involves incorporating homomorphic encryption algorithms into the federated learning process. However, existing federated learning privacy protection schemes based on homomorphic encryption greatly reduce efficiency and robustness when there are performance differences between parties or abnormal nodes. To solve these problems, this paper proposes a privacy protection scheme named Federated Learning-Elastic Averaging Stochastic Gradient Descent (FL-EASGD), based on a fully homomorphic encryption algorithm. First, this paper introduces the homomorphic encryption algorithm into the FL-EASGD scheme to prevent model plaintext leakage and realize privacy security in the process of model aggregation. Second, this paper designs a robust model aggregation algorithm by adding time variables and constraint coefficients, which ensures the accuracy of model prediction while handling performance differences such as computation speed and node anomalies such as participant downtime. In addition, the scheme preserves each party's independent exploration of its local model, making the model more applicable to the local data distribution. Finally, experimental analysis shows that when there are abnormalities among the participants, the efficiency and accuracy of the whole protocol are not significantly affected.
Keywords: federated learning; homomorphic encryption; privacy security; stochastic gradient descent
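The elastic averaging rule at the heart of EASGD couples each worker to a center variable through an elastic force; a minimal plaintext sketch (without the homomorphic encryption layer or the paper's time variables and constraint coefficients) of one round, using the averaged-pull variant:

```python
def easgd_round(workers, center, grads, lr=0.1, rho=0.5):
    """One elastic averaging SGD round: every worker takes a gradient step
    while being pulled toward the center variable, and the center moves
    toward the average of the workers."""
    new_workers = [[wi - lr * (gi + rho * (wi - ci))
                    for wi, gi, ci in zip(w, g, center)]
                   for w, g in zip(workers, grads)]
    alpha = lr * rho
    new_center = [ci + alpha * sum(w[j] - ci for w in workers) / len(workers)
                  for j, ci in enumerate(center)]
    return new_workers, new_center

# with zero gradients, the elastic force alone pulls workers to the center
workers, center = [[1.0], [-1.0]], [0.0]
for _ in range(100):
    workers, center = easgd_round(workers, center, [[0.0], [0.0]])
```

Because the elastic term only nudges rather than overwrites each worker, slow or temporarily absent participants do not stall the round, which is the robustness property FL-EASGD builds on.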
17. Fine-Tuning Cyber Security Defenses: Evaluating Supervised Machine Learning Classifiers for Windows Malware Detection
Authors: Islam Zada, Mohammed Naif Alatawi, Syed Muhammad Saqlain, Abdullah Alshahrani, Adel Alshamran, Kanwal Imran, Hessa Alfraihi. Computers, Materials & Continua, SCIE EI, 2024(8): 2917-2939 (23 pages)
Abstract: Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection. However, there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection; addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, there is a lack of systematic comparison of supervised classifiers for Windows malware detection, and understanding their relative effectiveness can inform the selection of optimal detection methods and improve overall security measures. This study aims to bridge the research gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives include: investigating the performance of various classifiers, such as Gaussian Naïve Bayes, K Nearest Neighbors (KNN), Stochastic Gradient Descent Classifier (SGDC), and Decision Tree, in detecting Windows malware; evaluating the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; identifying the strengths and limitations of different classifiers to provide insights for cybersecurity practitioners and researchers; and offering recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset's characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training utilizes various supervised classifiers, and their performance is evaluated using metrics such as accuracy, precision, recall, and F1 score. The study's outcomes comprise a comparative analysis of supervised machine learning classifiers for Windows malware detection. Results reveal the effectiveness and efficiency of each classifier in detecting different types of malware, and insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
Keywords: security and privacy challenges in requirements engineering; supervised machine learning; malware detection; Windows systems; comparative analysis; Gaussian Naïve Bayes; K Nearest Neighbors; Stochastic Gradient Descent Classifier; Decision Tree
18. Performance Enhancement of Adaptive Neural Networks Based on Learning Rate
Authors: Swaleha Zubair, Anjani Kumar Singha, Nitish Pathak, Neelam Sharma, Shabana Urooj, Samia Rabeh Larguech. Computers, Materials & Continua, SCIE EI, 2023(1): 2005-2019 (15 pages)
Abstract: Deep learning is the process of determining the parameters that minimize the cost function derived from the dataset; the parameters found by this optimization are known as the optimal parameters. To solve the optimization, the parameters are initialized and then refined during the optimization process, and at the global minimum there should be no variation in the cost function with respect to the parameters. The momentum technique is a parameter optimization approach; however, it has difficulty stopping at the right parameters when the cost function value reaches the global minimum (the non-stop problem). Moreover, existing approaches reduce the learning rate over the iteration period, monotonically and at a steady rate over time; our goal instead is to make the learning rate adapt to the parameters. We present a method for determining the best parameters that adjusts the learning rate in response to the cost function value, so that the rate schedule completes once the cost function has been optimized. This approach is shown to ensure convergence to the optimal parameters, which indicates that our strategy minimizes the cost function (i.e., learns effectively). The momentum approach is used in the proposed method; to solve its non-stop problem, we use the cost function value of the parameters, so the learning technique reduces the step size under the influence of the cost function. To verify that the learning works, we employed proofs of convergence and empirical tests against current methods; the results are obtained using Python.
Keywords: deep learning; optimization; convergence; stochastic gradient methods
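One illustrative reading of a cost-driven learning rate, as opposed to a monotone decay on a fixed clock, is a plateau-triggered schedule: cut the rate only when the observed cost value stops improving. This is a sketch of that idea, not the authors' exact rule:

```python
def adaptive_lr_descent(cost, grad, w, lr, shrink=0.5, patience=3, steps=200):
    """Gradient descent whose learning rate is cut only when the observed
    cost value stops improving, instead of decaying monotonically."""
    best, stall = cost(w), 0
    for _ in range(steps):
        w = w - lr * grad(w)
        c = cost(w)
        if c < best - 1e-12:
            best, stall = c, 0
        else:
            stall += 1
            if stall >= patience:   # plateau (or divergence): shrink the rate
                lr *= shrink
                stall = 0
    return w, lr

# minimize (w - 3)^2 starting from a rate too large to converge on its own
w_opt, final_lr = adaptive_lr_descent(lambda w: (w - 3.0) ** 2,
                                      lambda w: 2.0 * (w - 3.0),
                                      w=0.0, lr=1.2)
```

On this toy problem the initial rate of 1.2 makes the iterates diverge; the rising cost triggers the cut after a few steps, and the halved rate then converges, which is the self-correcting behavior a cost-aware schedule buys over a fixed decay.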
Chimp Optimization Algorithm Based Feature Selection with Machine Learning for Medical Data Classification
19
作者 Firas Abedi Hayder M.A.Ghanimi +6 位作者 Abeer D.Algarni Naglaa F.Soliman Walid El-Shafai Ali Hashim Abbas Zahraa H.Kareem Hussein Muhi Hariz Ahmed Alkhayyat 《Computer Systems Science & Engineering》 SCIE EI 2023年第12期2791-2814,共24页
Data mining plays a crucial role in extracting meaningful knowledge from large-scale data repositories, such as data warehouses and databases. Association rule mining, a fundamental process in data mining, involves discovering correlations, patterns, and causal structures within datasets. In the healthcare domain, association rules offer valuable opportunities for building knowledge bases, enabling intelligent diagnoses, and extracting invaluable information rapidly. This paper presents a novel approach called the Machine Learning based Association Rule Mining and Classification for Healthcare Data Management System (MLARMC-HDMS). The MLARMC-HDMS technique integrates classification and association rule mining (ARM) processes. Initially, the chimp optimization algorithm-based feature selection (COAFS) technique is employed within MLARMC-HDMS to select relevant attributes. Inspired by the foraging behavior of chimpanzees, the COA algorithm mimics their search strategy for food. Subsequently, the classification process utilizes stochastic gradient descent with a multilayer perceptron (SGD-MLP) model, while the Apriori algorithm determines attribute relationships. We propose a COA-based feature selection approach for medical data classification using machine learning techniques. This approach involves selecting pertinent features from medical datasets through COA and training machine learning models using the reduced feature set. We evaluate the performance of our approach on various medical datasets employing diverse machine learning classifiers. Experimental results demonstrate that our proposed approach surpasses alternative feature selection methods, achieving higher accuracy and precision rates in medical data classification tasks. The study showcases the effectiveness and efficiency of the COA-based feature selection approach in identifying relevant features, thereby enhancing the diagnosis and treatment of various diseases. To provide further validation, we conduct detailed experiments on a benchmark medical dataset, revealing the superiority of the MLARMC-HDMS model over other methods, with a maximum accuracy of 99.75%. Therefore, this research contributes to the advancement of feature selection techniques in medical data classification and highlights the potential for improving healthcare outcomes through accurate and efficient data analysis. The presented MLARMC-HDMS framework and COA-based feature selection approach offer valuable insights for researchers and practitioners working in the field of healthcare data mining and machine learning.
Keywords: Association rule mining; data classification; healthcare data; machine learning; parameter tuning; data mining; feature selection; MLARMC-HDMS; COA; stochastic gradient descent; Apriori algorithm
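The Apriori step mentioned in the abstract above (determining attribute relationships) works by level-wise candidate generation: an itemset can only be frequent if all of its subsets are. As a minimal, hedged sketch of the idea — not the paper's implementation, and with toy healthcare transactions as an illustrative assumption — frequent itemsets can be mined like this:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {itemset: support} for all itemsets meeting min_support,
    found by level-wise (Apriori-style) candidate generation."""
    n = len(transactions)
    sets = [set(t) for t in transactions]
    # Level 1: count individual items.
    counts = {}
    for t in sets:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Join step (simplified): build size-k candidates from items that
        # still appear in some frequent (k-1)-itemset.
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)]
        counts = {c: sum(1 for t in sets if c <= t) for c in candidates}
        frequent = {c: v / n for c, v in counts.items() if v / n >= min_support}
        result.update(frequent)
        k += 1
    return result

# Illustrative symptom transactions (hypothetical data):
rules_input = [["fever", "cough"], ["fever", "cough", "fatigue"],
               ["fever"], ["cough"]]
freq = apriori(rules_input, min_support=0.5)
```

Here {fever, cough} co-occurs in 2 of 4 transactions (support 0.5), so it survives at the 0.5 threshold, while {fatigue} (support 0.25) is pruned. A full ARM pipeline would then derive rules such as fever → cough from these itemsets.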
Routing with Cooperative Nodes Using Improved Learning Approaches
20
Authors: R. Raja, N. Satheesh, J. Britto Dennis, C. Raghavendra. 《Intelligent Automation & Soft Computing》 SCIE, 2023, Issue 3, pp. 2857-2874 (18 pages)
In IoT, routing among the cooperative nodes plays an incredible role in fulfilling the network requirements and enhancing system performance. The evaluation of optimal routing and related routing parameters over the deployed network environment is challenging. This research concentrates on modelling a memory-based routing model with Stacked Long Short-Term Memory (s-LSTM) and Bi-directional Long Short-Term Memory (b-LSTM). It is used to hold the routing information and random routing to attain superior performance. The proposed model is trained based on the searching and detection mechanisms to compute the packet delivery ratio (PDR), end-to-end (E2E) delay, throughput, etc. The anticipated s-LSTM and b-LSTM models intend to ensure Quality of Service (QoS) even under changing network topology. The performance of the proposed b-LSTM and s-LSTM is measured by comparing the model with various prevailing approaches. The error rate of the model is measured with Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The prediction of the error rate is made with Learning-based Stochastic Gradient Descent (L-SGD), a gradual gradient descent that seeks the minimal error through successive iterations. The simulation is performed in a MATLAB 2020a environment, and the model performance is evaluated against diverse approaches. The anticipated model gives superior performance in contrast to prevailing approaches.
Keywords: Internet of Things (IoT); stacked long short-term memory; bi-directional long short-term memory; error rate; stochastic gradient descent
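The abstract above describes L-SGD only at a high level: descend the gradient of an error measure over successive iterations until the error is minimal. As a hedged sketch of that core mechanism (plain stochastic gradient descent on a squared-error loss — the linear model, data, and learning rate are illustrative assumptions, not the paper's setup):

```python
import random

def sgd_minimize_error(samples, lr=0.02, epochs=1000, seed=0):
    """Fit y ~ w*x + b by stochastic gradient descent on squared error,
    driving the per-sample error toward its minimum over iterations."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    data = list(samples)
    for _ in range(epochs):
        rng.shuffle(data)              # stochastic: random sample order
        for x, y in data:
            err = (w * x + b) - y      # signed prediction error
            w -= lr * err * x          # gradient of 0.5*err^2 w.r.t. w
            b -= lr * err              # gradient of 0.5*err^2 w.r.t. b
    return w, b

# Illustrative (hypothetical) delay measurements following y = 2x + 1:
w, b = sgd_minimize_error([(0, 1), (1, 3), (2, 5), (3, 7)])
```

Because the toy data is noiseless, successive iterations contract the error toward zero, and (w, b) approaches (2, 1); on real routing metrics the same loop would instead settle near the minimal achievable MAE/RMSE.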