Journal Articles
1,510 articles found
1. L_(1)-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection
Authors: Chuandong Qin, Yu Cao, Liqun Meng. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 1975-1994.
Abstract: Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Automated brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes, and machine learning models have become key players in it, with gradient descent methods as the mainstream algorithms for training such models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L_(1)-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. First, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the non-differentiability at zero encountered by the traditional hinge loss during gradient descent optimization. Second, L_(1) regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent with momentum (PGD) and its distributed variant (DPGD) are proposed and applied to the L_(1)-Smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters and thus enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark to make full use of multi-core resources, and thanks to the sparsity induced by L_(1) regularization of the parameters it exhibits significantly accelerated convergence: in terms of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve cutting-edge accuracy and efficiency in brain tumor detection. With pre-trained models, both PGD and DPGD outperform the other models, reaching an accuracy of 95.21%.
Keywords: support vector machine; proximal stochastic gradient descent; brain tumor detection; distributed computing.
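As a rough, single-machine illustration of the optimizer described above (smooth hinge loss, an L_(1) proximal step, and momentum), the NumPy sketch below implements proximal SGD with momentum for a linear SVM; the quadratically smoothed hinge, the batch size, and all hyperparameter values are assumptions for illustration, and the Spark-based distributed variant (DPGD) is not reproduced.

```python
import numpy as np

def smooth_hinge_grad(margin):
    """Derivative of a quadratically smoothed hinge loss w.r.t. the margin z = y * (w.x)."""
    # loss: 0.5 - z for z <= 0, 0.5*(1 - z)**2 for 0 < z < 1, 0 for z >= 1
    g = np.zeros_like(margin)
    g[margin <= 0] = -1.0
    mid = (margin > 0) & (margin < 1)
    g[mid] = margin[mid] - 1.0
    return g

def soft_threshold(w, t):
    """Proximal operator of t * ||w||_1 (soft thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def proximal_sgd_momentum(X, y, lam=1e-3, lr=0.1, beta=0.9, epochs=10, batch=32, rng=None):
    """Minimize (1/n) * sum smooth_hinge(y_i * w.x_i) + lam * ||w||_1."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    v = np.zeros(d)                                    # momentum buffer
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(n // batch, 1)):
            margins = y[idx] * (X[idx] @ w)
            grad = X[idx].T @ (smooth_hinge_grad(margins) * y[idx]) / len(idx)
            v = beta * v + grad                        # heavy-ball momentum on the smooth part
            w = soft_threshold(w - lr * v, lr * lam)   # proximal step handles the L1 term
    return w
```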
2. Convergence of Hyperbolic Neural Networks Under Riemannian Stochastic Gradient Descent
Authors: Wes Whiting, Bao Wang, Jack Xin. Communications on Applied Mathematics and Computation (EI), 2024, No. 2, pp. 1175-1188.
Abstract: We prove, under mild conditions, the convergence of a Riemannian gradient descent method for a hyperbolic neural network regression model, in both the batch gradient descent and stochastic gradient descent settings. We also discuss a Riemannian version of the Adam algorithm and show numerical simulations of these algorithms on various benchmarks.
Keywords: hyperbolic neural network; Riemannian gradient descent; Riemannian Adam (RAdam); training convergence.
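To make the Riemannian update concrete, here is a minimal sketch of one Riemannian SGD step for a parameter constrained to the Poincare ball model of hyperbolic space; the use of a simple retraction with projection (instead of the exact exponential map) and all constants are assumptions, and the hyperbolic network architecture itself is not shown.

```python
import numpy as np

def riemannian_sgd_step(x, euclidean_grad, lr=0.01, eps=1e-5):
    """One Riemannian SGD step for a parameter living on the Poincare ball.

    The Poincare ball metric is conformal to the Euclidean one with factor
    lambda(x) = 2 / (1 - ||x||^2), so the Riemannian gradient is the Euclidean
    gradient rescaled by ((1 - ||x||^2)^2) / 4.
    """
    sq_norm = np.dot(x, x)
    rgrad = euclidean_grad * (1.0 - sq_norm) ** 2 / 4.0
    x_new = x - lr * rgrad                  # first-order retraction instead of the exact exponential map
    norm = np.linalg.norm(x_new)
    if norm >= 1.0:                         # project back strictly inside the unit ball
        x_new = x_new / norm * (1.0 - eps)
    return x_new
```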
3. Fractional Gradient Descent RBFNN for Active Fault-Tolerant Control of Plant Protection UAVs
Authors: Lianghao Hua, Jianfeng Zhang, Dejie Li, Xiaobo Xi. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 2129-2157.
Abstract: With the increasing prevalence of high-order systems in engineering applications, such systems often exhibit significant disturbances and can be challenging to model accurately, and the active disturbance rejection controller (ADRC) has therefore been widely applied in various fields. However, in controlling plant protection unmanned aerial vehicles (UAVs), which are typically large and subject to significant disturbances, load disturbances and the possibility of multiple actuator faults during pesticide spraying pose significant challenges. To address these issues, this paper proposes a novel fault-tolerant control method that combines a radial basis function neural network (RBFNN) with a second-order ADRC and leverages a fractional gradient descent (FGD) algorithm. We integrate the plant protection UAV model's uncertain parameters, load disturbance parameters, and actuator fault parameters, and use the RBFNN for system parameter identification. The resulting ADRC exhibits load disturbance suppression and fault tolerance capabilities, and the proposed active fault-tolerant control law has Lyapunov stability implications. Experimental results obtained on a multi-rotor fault-tolerant test platform demonstrate that the proposed method outperforms other control strategies in load disturbance suppression and fault-tolerant performance.
Keywords: radial basis function neural network; plant protection unmanned aerial vehicle; active disturbance rejection controller; fractional gradient descent algorithm.
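The abstract does not give the exact FGD rule, so the following is only a hedged sketch of a commonly used Caputo-style fractional gradient step that could drive RBFNN weight updates; the reference point, the fractional order alpha, and the step size are all assumptions rather than the paper's formulation.

```python
import numpy as np
from math import gamma

def fractional_gd_step(w, grad, w_prev, alpha=0.9, lr=0.01, eps=1e-8):
    """One Caputo-style fractional-order gradient descent step (illustrative only).

    A common first-order approximation scales the ordinary gradient by
    |w - w_prev|^(1 - alpha) / Gamma(2 - alpha), taken relative to the previous
    iterate; for alpha = 1 the update reduces to plain gradient descent.
    """
    scale = (np.abs(w - w_prev) + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)
    return w - lr * grad * scale
```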
4. A Relatively Accelerated SGD Algorithm for a Class of Nonsmooth Convex Optimization Problems
Authors: Zhang Wenjuan, Feng Xiangchu, Xiao Feng, Huang Shujuan, Li Huan. Journal of Xidian University (EI, CAS, CSCD, PKU Core), 2024, No. 3, pp. 147-157.
Abstract: First-order optimization algorithms are widely used in machine learning, big data science, and computer vision because they are simple and cheap to compute, but most existing first-order algorithms require the objective function to have a Lipschitz continuous gradient, which many practical problems do not satisfy. Building on the classical gradient descent algorithm and introducing randomness and acceleration, this paper proposes a relatively accelerated stochastic gradient descent algorithm. The algorithm does not require a Lipschitz continuous gradient; instead, by generalizing the Euclidean distance to a Bregman distance, it weakens the Lipschitz-gradient condition to a relative smoothness condition. Since the convergence of the relatively accelerated stochastic gradient descent algorithm depends on a uniform triangle-scaling exponent, and tuning the optimal exponent is laborious, an adaptive relatively accelerated stochastic gradient descent algorithm that selects this parameter adaptively is also given. Theoretical convergence analysis shows that the objective values of the iterates converge to the optimal objective value. Numerical experiments on a Poisson inverse problem and on minimization problems whose Hessian operator norm grows polynomially with the variable norm show that the adaptive relatively accelerated SGD algorithm and the relatively accelerated SGD algorithm converge better than the relative SGD algorithm.
Keywords: convex optimization; nonsmooth optimization; relative smoothness; stochastic programming; gradient methods; accelerated stochastic gradient descent.
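To illustrate the Bregman generalization described above, the sketch below performs one relative (Bregman) gradient step with the negative-entropy kernel on the positive orthant, which is an assumed example kernel; the paper's acceleration and its adaptive selection of the triangle-scaling exponent are not reproduced.

```python
import numpy as np

def bregman_gradient_step(x, grad, step=1.0):
    """One relative (Bregman) gradient step with the kernel h(x) = sum x*log(x).

    Replacing the Euclidean distance in the gradient step with the Bregman
    divergence of h, i.e. argmin_u <grad, u> + (1/step) * D_h(u, x), has the
    closed-form multiplicative update below for x in the positive orthant.
    """
    return x * np.exp(-step * grad)
```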
5. Staged Adaptive Optimization of the SPGD Algorithm in Laser Coherent Beam Combining Systems
Authors: Zheng Wenhui, Qi Jiaqin, Jiang Wenjun, Tan Guiyuan, Hu Qiqi, Gao Huai'en, Dou Jiazhen, Di Jianglei, Qin Yuwen. Infrared and Laser Engineering (EI, CSCD, PKU Core), 2024, No. 9, pp. 303-315.
Abstract: To remedy the slow convergence and the tendency to fall into local optima of the traditional stochastic parallel gradient descent (SPGD) algorithm when applied to large-scale laser coherent beam combining systems, a staged adaptive-gain SPGD algorithm (Staged SPGD) is proposed. According to the value of the performance metric, the algorithm adjusts the gain coefficient adaptively with different strategies in different convergence stages, and introduces a control-voltage update rule containing a gradient-update factor, which speeds up convergence while reducing the probability of the algorithm getting trapped in local extrema. Experimental results show that in a 19-channel coherent combining system the Staged SPGD algorithm converges 36.84% faster than the traditional SPGD algorithm, converges well under phase noise of different frequencies and amplitudes, and is markedly more stable. When applied directly to 37-, 61-, and 91-channel combining systems, Staged SPGD improves the convergence speed over traditional SPGD by 37.88%, 40.85%, and 41.10%, respectively, and the improvement grows with the number of combining channels, indicating advantages in convergence speed, stability, and scalability and the potential to extend to large-scale coherent combining systems.
Keywords: laser coherent beam combining; phase control; stochastic parallel gradient descent algorithm; SPGD algorithm.
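For reference, the baseline SPGD iteration that the staged variant builds on can be sketched as below; measure_metric, the Bernoulli perturbation amplitude, and the fixed gain are assumptions, and the paper's staged gain scheduling and gradient-update factor are only noted in a comment.

```python
import numpy as np

def spgd(measure_metric, n_channels, gain=0.5, delta_amp=0.05, iters=500, rng=None):
    """Basic two-sided SPGD loop for phase control in coherent beam combining.

    measure_metric(u) is assumed to return the combining quality metric J
    (e.g., power in the bucket) for control voltages u; higher is better.
    """
    rng = rng or np.random.default_rng(0)
    u = np.zeros(n_channels)                     # control voltages applied to the phase modulators
    for _ in range(iters):
        delta = delta_amp * rng.choice([-1.0, 1.0], size=n_channels)  # random Bernoulli perturbation
        dJ = measure_metric(u + delta) - measure_metric(u - delta)    # two-sided metric difference
        u = u + gain * dJ * delta                # stochastic parallel gradient ascent on J
        # a staged adaptive-gain variant would additionally adjust `gain` based on the current J
    return u
```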
6. Rockburst Intensity Grade Prediction Model Based on Batch Gradient Descent and Multi-Scale Residual Deep Neural Network
Authors: Yu Zhang, Mingkui Zhang, Jitao Li, Guangshu Chen. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 11, pp. 1987-2006.
Abstract: Rockburst is a phenomenon in which free surfaces formed during excavation cause a sudden release of energy in the construction of mines and tunnels. A light rockburst only peels off rock slices without ejection, while a severe rockburst causes casualties and property loss, and the frequency and severity of rockburst damage increase with excavation depth. Since rockburst is the leading engineering geological hazard in the excavation process, predicting its intensity grade is of great significance to geotechnical engineering and is a problem that needs to be solved urgently. By comprehensively considering the occurrence mechanism of rockburst, this paper selects the stress index (σ_(θ)/σ_(c)), brittleness index (σ_(c)/σ_(t)), and rock elastic energy index (Wet) as the rockburst evaluation indexes through the Spearman coefficient method, which overcomes the low accuracy of prediction methods based on a single evaluation index. The BGD-MSR-DNN rockburst intensity grade prediction model, based on batch gradient descent and a multi-scale residual deep neural network, is then proposed. The batch gradient descent (BGD) module replaces the ordinary gradient descent algorithm, effectively improving the efficiency of the network and reducing the training time, while the multi-scale residual (MSR) module solves the network degradation that occurs when the deep neural network (DNN) has too many hidden layers, improving prediction accuracy. The experiments show that the BGD-MSR-DNN model reaches an accuracy of 97.1%, outperforming the comparable models, and it achieves 100% accuracy on actual projects such as the Qinling Tunnel and the Daxiangling Tunnel. The model can be applied in mine and tunnel engineering to realize accurate and rapid prediction of rockburst intensity grade.
Keywords: rockburst prediction; rockburst intensity grade; deep neural network; batch gradient descent; multi-scale residual.
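The full BGD-MSR-DNN is too large for a short sketch, so the example below illustrates only the batch gradient descent ingredient: a full-batch softmax classifier trained on the three evaluation indexes. The learning rate, epoch count, and the use of softmax regression in place of the multi-scale residual DNN are assumptions.

```python
import numpy as np

def train_batch_gd(X, y, n_classes, lr=0.1, epochs=2000):
    """Full-batch gradient descent for a softmax classifier on rockburst indexes.

    X is the (n_samples, 3) matrix of evaluation indexes (stress, brittleness,
    elastic energy); y holds integer intensity grades. Unlike SGD, every update
    uses the gradient over the entire training set, which is the BGD idea.
    """
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                     # one-hot targets
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)        # softmax probabilities
        G = (P - Y) / n                          # gradient of mean cross-entropy w.r.t. logits
        W -= lr * (X.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b
```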
7. Anderson Acceleration of Gradient Methods with Energy for Optimization Problems
Authors: Hailiang Liu, Jia-Hao He, Xuping Tian. Communications on Applied Mathematics and Computation (EI), 2024, No. 2, pp. 1299-1318.
Abstract: Anderson acceleration (AA) is an extrapolation technique designed to speed up fixed-point iterations. For optimization problems, we propose a novel algorithm by combining AA with the energy adaptive gradient method (AEGD) [arXiv:2010.05109]. The feasibility of our algorithm is ensured in light of the convergence theory for AEGD, even though AEGD is not a fixed-point iteration. We provide rigorous convergence rates of AA for gradient descent (GD), expressed through an acceleration factor of the gain at each implementation of AA-GD. Our experimental results show that the proposed AA-AEGD algorithm requires little tuning of hyperparameters and exhibits superior fast convergence.
Keywords: Anderson acceleration (AA); gradient descent (GD); energy stability.
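As a concrete reference for the AA-GD part of the discussion, the sketch below applies windowed Anderson acceleration to the gradient descent fixed-point map g(x) = x - lr * grad(x); the window size, the Tikhonov regularization of the least-squares mixing problem, and the unit mixing parameter are assumptions, and AEGD itself is not implemented.

```python
import numpy as np

def aa_gd(grad, x0, lr=0.1, m=5, iters=200, reg=1e-10):
    """Anderson acceleration of the gradient descent fixed-point map g(x) = x - lr * grad(x).

    Keeps a window of the last m residuals r_k = g(x_k) - x_k and extrapolates
    with the affine combination minimizing the norm of the mixed residual.
    """
    x = np.asarray(x0, dtype=float)
    X_hist, R_hist = [], []                     # histories of map outputs and residuals
    for _ in range(iters):
        gx = x - lr * grad(x)                   # plain GD map
        r = gx - x                              # fixed-point residual
        X_hist.append(gx)
        R_hist.append(r)
        if len(R_hist) > m:
            X_hist.pop(0)
            R_hist.pop(0)
        R = np.stack(R_hist, axis=1)            # d x k residual matrix
        k = R.shape[1]
        G = R.T @ R + reg * np.eye(k)           # normal equations for min ||R a|| s.t. sum(a) = 1
        alpha = np.linalg.solve(G, np.ones(k))
        alpha /= alpha.sum()
        x = np.stack(X_hist, axis=1) @ alpha    # Anderson-mixed iterate
    return x
```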
8. Stochastic Gradient Compression for Federated Learning over Wireless Network
Authors: Lin Xiaohan, Liu Yuan, Chen Fangjiong, Huang Yang, Ge Xiaohu. China Communications (SCIE, CSCD), 2024, No. 4, pp. 230-247.
Abstract: As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of the edge devices. We first derive a closed form of the communication compression in terms of the sparsification and quantization factors. Then the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem of minimizing the convergence upper bound under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
Keywords: federated learning; gradient compression; quantization; resource allocation; stochastic gradient descent (SGD).
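To make the sparsification-plus-quantization idea tangible, here is a hedged sketch that compresses a stochastic gradient by keeping its top-k entries and uniformly quantizing them; the 4-bit uniform quantizer, the top-k rule, and the encoding format are assumptions rather than the paper's exact scheme.

```python
import numpy as np

def compress_gradient(g, k, n_bits=4):
    """Sparsify a gradient to its top-k entries, then uniformly quantize them.

    Returns the kept indices, the quantized integer levels, and the (min, max)
    range needed for dequantization.
    """
    idx = np.argpartition(np.abs(g), -k)[-k:]        # top-k coordinates by magnitude
    vals = g[idx]
    lo, hi = vals.min(), vals.max()
    levels = (1 << n_bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((vals - lo) / scale).astype(np.uint8)
    return idx, q, (lo, hi)

def decompress_gradient(idx, q, value_range, dim, n_bits=4):
    """Rebuild a dense gradient estimate from the compressed representation."""
    lo, hi = value_range
    levels = (1 << n_bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    g_hat = np.zeros(dim)
    g_hat[idx] = lo + q.astype(float) * scale
    return g_hat
```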
9. FL-EASGD: Federated Learning Privacy Security Method Based on Homomorphic Encryption
Authors: Hao Sun, Xiubo Chen, Kaiguo Yuan. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2361-2373.
Abstract: Federated learning ensures data privacy and security by sharing models among multiple computing nodes instead of plaintext data. However, there is still a potential risk of privacy leakage; for example, attackers can recover the original data through model inference attacks, so safeguarding the privacy of model parameters becomes crucial. One proposed solution is to incorporate homomorphic encryption into the federated learning process, but existing homomorphic-encryption-based privacy protection schemes for federated learning lose much of their efficiency and robustness when the parties differ in performance or when nodes behave abnormally. To solve these problems, this paper proposes a privacy protection scheme named Federated Learning-Elastic Averaging Stochastic Gradient Descent (FL-EASGD) based on a fully homomorphic encryption algorithm. First, the homomorphic encryption algorithm is introduced into the FL-EASGD scheme to prevent model plaintext leakage and achieve privacy security during model aggregation. Second, a robust model aggregation algorithm is designed by adding time variables and constraint coefficients, which preserves prediction accuracy while accommodating performance differences such as computation speed and node anomalies such as participant downtime. In addition, the scheme preserves each party's independent exploration of its local model, making the model better suited to the local data distribution. Finally, experimental analysis shows that when some participants behave abnormally, the efficiency and accuracy of the whole protocol are not significantly affected.
Keywords: federated learning; homomorphic encryption; privacy security; stochastic gradient descent.
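For orientation, the sketch below shows one plaintext round of the underlying elastic averaging SGD (EASGD) update; the elastic coefficient, learning rate, and synchronous aggregation are assumptions, and the homomorphic encryption layer and the paper's time variables and constraint coefficients are deliberately omitted.

```python
import numpy as np

def easgd_round(local_params, local_grads, center, lr=0.05, rho=0.1):
    """One plaintext elastic averaging SGD (EASGD) round across workers.

    Each worker takes a local SGD step plus an elastic penalty pulling it toward
    the shared center variable, and the center moves toward the workers' average.
    In FL-EASGD the exchanged parameters would additionally be encrypted; that
    layer is not modeled here.
    """
    new_locals = []
    elastic_sum = np.zeros_like(center)
    for x, g in zip(local_params, local_grads):
        diff = x - center
        new_locals.append(x - lr * (g + rho * diff))   # local step with elastic term
        elastic_sum += diff
    new_center = center + lr * rho * elastic_sum       # center (global model) update
    return new_locals, new_center
```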
10. Differentially Private SGD with Random Features
Authors: Wang Yi-guang, Guo Zheng-chu. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2024, No. 1, pp. 1-23.
Abstract: In the realm of large-scale machine learning, it is crucial to explore methods that reduce computational complexity and memory demands while maintaining generalization performance. Additionally, since the collected data may contain sensitive information, it is also of great significance to study privacy-preserving machine learning algorithms. This paper focuses on the performance of the differentially private stochastic gradient descent (SGD) algorithm based on random features. The algorithm first maps the original data into a low-dimensional space, avoiding the large-scale storage requirement of the traditional kernel method; it then iteratively optimizes the parameters by stochastic gradient descent; lastly, the output perturbation mechanism is employed to add random noise and ensure privacy. We prove that the proposed algorithm satisfies differential privacy while achieving fast convergence rates under mild conditions.
Keywords: learning theory; differential privacy; stochastic gradient descent; random features; reproducing kernel Hilbert spaces.
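A minimal sketch of this pipeline (random Fourier features, SGD on the squared loss, then output perturbation) is given below; the Gaussian kernel bandwidth, the feature dimension, and especially the noise scale, which in practice must be calibrated to the loss sensitivity and the target privacy budget, are all assumptions.

```python
import numpy as np

def dp_sgd_random_features(X, y, D=200, sigma=1.0, lr=0.05, epochs=5,
                           noise_scale=0.1, rng=None):
    """Least-squares SGD on random Fourier features with output perturbation.

    Random Fourier features approximate the Gaussian kernel in D dimensions, so
    no kernel matrix is stored; after SGD the learned weights are released with
    added Gaussian noise (output perturbation).
    """
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, D))      # spectral frequencies of the Gaussian kernel
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)             # random feature map
    w = np.zeros(D)
    for _ in range(epochs):
        for i in rng.permutation(n):
            err = Z[i] @ w - y[i]
            w -= lr * err * Z[i]                         # SGD step on the squared loss
    return w + rng.normal(scale=noise_scale, size=D)     # output perturbation before release
```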
11. GDLIN: A Learned Index Using Gradient Descent (cited: 1)
Authors: Chen Shanshan, Gao Jun, Ma Zhenyu. Computer Science (CSCD, PKU Core), 2023, No. S01, pp. 527-532.
Abstract: In the big data era, data access speed is an important indicator of the performance of large-scale storage systems, and indexing is one of the main techniques used to improve data access performance in database systems. In recent years, the learned index (LI) has been proposed: machine learning models replace traditional indexes such as the B+ tree, fit the data distribution, and turn indirect lookups into direct function evaluation, which speeds up queries and reduces index space overhead. However, LI has a relatively large fitting error and does not support modifications such as insertion. This paper proposes GDLIN (A Learned Index by Gradient Descent), a learned index model that fits the data with a gradient descent algorithm. GDLIN uses gradient descent to fit the data more closely, reducing the fitting error and shortening local lookup time; it also calls the data-fitting algorithm recursively to exploit the key distribution and build the upper-level structure, so the index structure does not grow with the data volume. In addition, GDLIN uses linked lists to address LI's lack of support for insertion. Experimental results show that GDLIN's throughput is 2.1 times that of the B+ tree when no new data are inserted, and 1.08 times that of LI when insertions make up 50% of the operations.
Keywords: learned index; gradient descent; data-fitting model; linked list.
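The sketch below shows the core idea in its simplest form: fit position ≈ a * key + b by gradient descent over a sorted key array, remember the maximum residual, and answer lookups with a bounded local search. The single-level linear model, the normalization, and the hyperparameters are assumptions; GDLIN's recursive upper structure and linked-list inserts are not reproduced.

```python
import bisect
import numpy as np

def fit_index(keys, lr=0.1, epochs=2000):
    """Fit position ~ a * key + b with gradient descent on normalized, sorted keys."""
    keys = np.asarray(keys, dtype=float)
    pos = np.arange(len(keys), dtype=float)
    k = (keys - keys.min()) / (keys.max() - keys.min())     # normalize keys for stable steps
    a, b = 0.0, 0.0
    for _ in range(epochs):
        err = a * k + b - pos
        a -= lr * np.mean(err * k)                           # gradient of the mean squared error
        b -= lr * np.mean(err)
    max_err = int(np.ceil(np.max(np.abs(a * k + b - pos))))  # worst-case prediction error
    return a, b, max_err, keys

def lookup(key, model):
    """Predict a position, then search only within the recorded error bound."""
    a, b, max_err, keys = model
    k = (key - keys.min()) / (keys.max() - keys.min())
    guess = int(round(a * k + b))
    lo = max(guess - max_err, 0)
    hi = min(guess + max_err + 1, len(keys))
    i = lo + bisect.bisect_left(keys[lo:hi].tolist(), key)
    return i if i < len(keys) and keys[i] == key else -1
```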
12. Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning (cited: 5)
Authors: Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 2, pp. 402-411.
Abstract: A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into the training process. With it, an MPSGD-based latent factor (MLF) model is obtained, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RS indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.
Keywords: big data; industrial application; industrial data; latent factor analysis; machine learning; parallel algorithm; recommender system (RS); stochastic gradient descent (SGD).
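As a serial illustration of the momentum ingredient only, the sketch below trains a latent factor model on sparse (user, item, rating) triples with momentum SGD; the factor dimension, regularization, and per-row momentum buffers are assumptions, and the paper's parallel data-splitting across workers is not shown.

```python
import numpy as np

def mf_sgd_momentum(ratings, n_users, n_items, k=16, lr=0.01, beta=0.9,
                    lam=0.05, epochs=20, rng=None):
    """Momentum SGD for a latent factor model on (user, item, rating) triples.

    Factorizes the rating matrix as P @ Q.T with L2 regularization; `ratings`
    is an (n, 3) array of observed entries.
    """
    rng = rng or np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    vP = np.zeros_like(P)
    vQ = np.zeros_like(Q)
    for _ in range(epochs):
        for u, i, r in rng.permutation(ratings):
            u, i = int(u), int(i)
            err = P[u] @ Q[i] - r
            gP = err * Q[i] + lam * P[u]
            gQ = err * P[u] + lam * Q[i]
            vP[u] = beta * vP[u] + gP            # momentum buffers kept per factor row
            vQ[i] = beta * vQ[i] + gQ
            P[u] -= lr * vP[u]
            Q[i] -= lr * vQ[i]
    return P, Q
```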
13. A Modified Three-Term Conjugate Gradient Method with Sufficient Descent Property (cited: 1)
Authors: Saman Babaie-Kafaki. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2015, No. 3, pp. 263-272.
Abstract: A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak, Ribière, and Polyak is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
Keywords: unconstrained optimization; conjugate gradient method; eigenvalue; sufficient descent condition; global convergence.
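As background for the hybridization, the sketch below implements the three-term PRP direction of Zhang et al. with Armijo backtracking, one of the two ingredients named above (the hybrid itself is not reproduced); the tolerance and line-search constants are assumptions. This direction satisfies g.d = -||g||^2 by construction, which is exactly the sufficient descent property.

```python
import numpy as np

def three_term_cg(f, grad, x0, max_iter=200, tol=1e-6, c=1e-4, shrink=0.5):
    """Three-term PRP conjugate gradient method of Zhang et al. with Armijo backtracking.

    Direction: d_{k+1} = -g_{k+1} + beta * d_k - theta * y_k, where
    beta = g_{k+1}.y_k / ||g_k||^2 and theta = g_{k+1}.d_k / ||g_k||^2,
    which gives g_{k+1}.d_{k+1} = -||g_{k+1}||^2 for any line search.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        while f(x + t * d) > f(x) + c * t * (g @ d):   # Armijo condition
            t *= shrink
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        gk2 = g @ g
        beta = (g_new @ y) / gk2
        theta = (g_new @ d) / gk2
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x
```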
14. A Time Series Prediction Method Based on the ILSTM-AMSGD Neural Network (cited: 1)
Authors: Yang Shuang, Li Wenjing, Qiao Junfei. Control Engineering of China (CSCD, PKU Core), 2023, No. 10, pp. 1793-1800.
Abstract: To address the large number of structural parameters and the long training time of the standard long short-term memory (LSTM) neural network, an improved LSTM neural network based on an adaptive momentum stochastic gradient descent (AMSGD) algorithm, the ILSTM-AMSGD neural network, is proposed and applied to time series prediction. First, the recurrent weights in the structural equations are simplified to reduce the number of parameters that have to be trained. Second, an AMSGD algorithm is designed to learn the structural parameters of the network. Finally, the accuracy and efficiency of the ILSTM-AMSGD model for time series prediction are verified experimentally on two benchmark datasets and one real dataset. The results show that the recurrent-weight simplification improves the generalization ability of the model, while the AMSGD algorithm speeds up convergence; compared with other models, the ILSTM-AMSGD neural network achieves more efficient and accurate time series prediction.
Keywords: time series prediction; improved long short-term memory neural network; weight simplification; gradient descent algorithm; adaptive; momentum.
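The abstract does not specify the exact AMSGD update, so the class below is only a generic illustration of combining a momentum buffer with a per-parameter adaptive step size (an AMSGrad-style rule); every coefficient and the rule itself are assumptions, not the paper's algorithm.

```python
import numpy as np

class AdaptiveMomentumSGD:
    """Generic adaptive-momentum SGD step (AMSGrad-style), for illustration only."""

    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = self.v_hat = None

    def step(self, w, g):
        if self.m is None:
            self.m = np.zeros_like(w)
            self.v = np.zeros_like(w)
            self.v_hat = np.zeros_like(w)
        self.m = self.beta1 * self.m + (1 - self.beta1) * g       # momentum estimate
        self.v = self.beta2 * self.v + (1 - self.beta2) * g * g   # second-moment estimate
        self.v_hat = np.maximum(self.v_hat, self.v)               # non-decreasing denominator
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```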
15. Research on Distribution Network Project Cost Control Based on an SGDM-Optimized IWOA-CNN (cited: 9)
Authors: Li Kang, Bao Gang, Xu Rui, Liu Yikai. Journal of Guangxi University (Natural Science Edition) (CAS, PKU Core), 2023, No. 3, pp. 692-702.
Abstract: To control the cost of distribution network engineering projects, the project cost must be predicted accurately. This paper proposes an improved whale optimization algorithm-convolutional neural network (IWOA-CNN) cost prediction model optimized with stochastic gradient descent with a momentum factor (SGDM). First, a nonlinear functional relationship is established between the distribution network project cost and the number of circuits, number of towers, conductors, terrain, geology, wind speed, ice coating, conductor cross-section, concrete poles, tower steel, suspension insulators, tension insulators, foundation excavation, foundation steel, base plates, and cement. A convolutional neural network using the SGDM optimizer is then employed to approximate this function, with Bayesian optimization used to tune the network's hyperparameters and the improved whale optimization algorithm (IWOA) used to find the optimal learning rate of the CNN. Numerical examples show that the new model predicts well, and corresponding cost control strategies are proposed.
Keywords: distribution network project cost; whale optimization algorithm; convolutional neural network; stochastic gradient descent optimizer; Bayesian optimization; nonlinear convergence factor; adaptive weight.
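For completeness, the SGDM optimizer ingredient by itself is the classical heavy-ball update sketched below; the learning rate and momentum coefficient are assumptions, and the IWOA search and Bayesian hyperparameter tuning are not reproduced.

```python
import numpy as np

def sgdm_update(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGDM (stochastic gradient descent with momentum factor) update.

    The velocity buffer accumulates past gradients so the step keeps moving in
    persistent descent directions.
    """
    velocity = momentum * velocity - lr * grad   # heavy-ball velocity update
    return w + velocity, velocity
```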
16. Projected Gradient Descent Based on Soft Thresholding in Matrix Completion (cited: 1)
Authors: Zhao Yujuan, Zheng Baoyu, Chen Shouning. Journal of Electronics (China), 2013, No. 6, pp. 517-524.
Abstract: Matrix completion is an extension of compressed sensing. In compressed sensing, underdetermined equations are solved using a sparsity prior on the unknown signals; in matrix completion, they are solved using a sparsity prior on the singular value set of the unknown matrix, also called a low-rank prior. This paper first introduces the basic concepts of matrix completion and analyses which matrices are suitable for it, showing that such matrices should satisfy two conditions: low rank and the incoherence property. It then reviews three reconstruction algorithms commonly used in matrix completion, namely the singular value thresholding algorithm, singular value projection, and atomic decomposition for minimum rank approximation, and points out their shared shortcoming of needing to know the rank of the original matrix. The Projected Gradient Descent based on Soft Thresholding (STPGD) algorithm proposed in this paper predicts the rank of the unknown matrix by soft thresholding and iterates via projected gradient descent, so it can estimate the rank of the unknown matrix exactly with low computational complexity, as verified by numerical experiments. We also analyze the convergence and computational complexity of STPGD, show that the algorithm is guaranteed to converge, and analyse the number of iterations needed to reach a given reconstruction error. Comparing its computational complexity with that of other algorithms, we conclude that STPGD not only reduces the computational complexity but also improves the precision of the reconstructed solution.
Keywords: matrix completion (MC); compressed sensing (CS); iterative thresholding algorithm; projected gradient descent based on soft thresholding (STPGD).
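In the spirit of STPGD, the sketch below alternates a gradient step on the observed entries with soft thresholding of the singular values and reads a rank estimate from the number of surviving singular values; the fixed threshold tau, the step size, and this particular rank-estimation rule are assumptions rather than the paper's exact schedule.

```python
import numpy as np

def matrix_completion_soft_threshold(M_obs, mask, tau=5.0, step=1.0, iters=300):
    """Gradient step on observed entries followed by singular value soft thresholding.

    mask is a boolean array marking the observed entries of M_obs. The number of
    singular values surviving the threshold gives a running rank estimate.
    """
    X = np.zeros_like(M_obs)
    s_shrunk = np.zeros(min(M_obs.shape))
    for _ in range(iters):
        G = mask * (X - M_obs)                       # gradient of 0.5 * ||P_Omega(X - M)||_F^2
        Y = X - step * G
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)          # soft thresholding of singular values
        X = (U * s_shrunk) @ Vt
    est_rank = int(np.count_nonzero(s_shrunk))
    return X, est_rank
```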
17. An Efficient Energy Routing Protocol Based on Gradient Descent Method in WSNs (cited: 1)
Authors: Ru Jin, Xinlian Zhou, Yue Wang. Journal of Information Hiding and Privacy Protection, 2020, No. 3, pp. 115-123.
Abstract: In a wireless sensor network [1], the operation of a node depends on the battery power it carries, and for environmental reasons the battery cannot be replaced. To extend the life cycle of the network, energy therefore becomes one of the key problems in the design of wireless sensor network (WSN) routing protocols [2]. This paper proposes ERGD, a routing protocol based on the gradient descent method that minimizes energy consumption. Within the communication radius of the current node, the distance between the current node and a candidate next-hop node is assumed to yield a projected energy along the distance from the current node to the base station (BS); this projected energy, together with the remaining energy of the candidate node, is the key factor in selecting the next-hop node. The simulation results show that the proposed protocol effectively extends the life cycle of the network and improves the reliability and fault tolerance of the system.
Keywords: wireless sensor network; gradient descent; residual energy; communication radius; network life cycle.
18. Research on a BP Neural Network Surrounding Rock Parameter Inversion Model Optimized by the SGD Algorithm (cited: 1)
Authors: Sun Ze, Song Zhanping, Yue Bo, Yang Zifan. Tunnel Construction (CSCD, PKU Core), 2023, No. 12, pp. 2066-2076.
Abstract: To make full use of the deformation information fed back by field monitoring data and to invert the mechanical parameters of the rock mass, the Niuliantang Tunnel on the TJ-1 section of the Jianhe-Liping Expressway in Guizhou Province is taken as the engineering background. With the elastic modulus, cohesion, Poisson's ratio, and internal friction angle of the surrounding rock as the influencing factors, 25 combinations of physical and mechanical parameters and the corresponding simulated crown settlement and haunch convergence values are obtained through an orthogonal experimental design and finite element simulation. The traditional BP neural network is improved with the stochastic gradient descent algorithm (SGD algorithm), and an SGD-optimized BP neural network model is established with the crown settlement and haunch convergence values as inputs and the elastic modulus, cohesion, Poisson's ratio, and internal friction angle as outputs, realizing inversion analysis of the surrounding rock parameters. The inverted parameters are then substituted into the finite element model to verify the feasibility and accuracy of the optimized BP neural network model. Finally, the deformation of the surrounding rock and the mechanical behavior of the primary support are analyzed and construction suggestions are given. The results show that: 1) the relative errors between the crown settlement, haunch convergence, and shoulder convergence values calculated with the SGD-optimized BP neural network model and the field measurements are 2.50%-24.01%, lower than those of the traditional BP neural network model (11.51%-93.71%), verifying the feasibility and superiority of the optimized model; 2) stress concentration occurs in the shotcrete layer and rock bolts at the feet of the upper and lower benches, with a risk of failure, so it is recommended to strengthen the support at the arch feet during construction to prevent engineering accidents.
Keywords: tunnel engineering; surrounding rock parameter inversion; stochastic gradient descent algorithm; neural network; orthogonal experiment method; numerical simulation.
19. Occluded Face Recognition Based on an Improved GD-HASLR Algorithm (cited: 1)
Authors: Xu Tiantian, Xi Zhihong. Electronic Science and Technology, 2023, No. 6, pp. 72-79.
Abstract: The recognition performance of occluded face recognition algorithms degrades when the number of training samples decreases. To solve this problem, an improved GD-HASLR (Gradient Direction-Based Hierarchical Adaptive Sparse and Low-Rank) algorithm is proposed. The algorithm first computes the generalized gradient direction of the face image, calculating the gradient magnitude and direction from the first to the third order, and obtains the gradient direction vector after applying a mapping function. This vector is then used as the input of the hierarchical adaptive sparse and low-rank model to solve for the representation coefficients and the error; a restarted fast iterative shrinkage-thresholding algorithm (FISTA-II) is adopted to solve for the sparse representation coefficients. Finally, the residuals of the first- to third-order test samples are computed, and the class with the highest frequency or the lowest average rank is taken as the classification result. Experimental results on the AR and Extended Yale B databases show that the improved method achieves better recognition than GD-HASLR and other methods.
Keywords: occlusion; face recognition; generalized gradient direction; gradient magnitude; gradient direction; hierarchical sparse and low-rank model; restarted fast iterative shrinkage-thresholding algorithm II; GD-HASLR.
20. Designing Fuzzy Inference System Based on Improved Gradient Descent Method
Authors: Zhang Liquan, Shao Cheng. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2006, No. 4, pp. 853-857, 863.
Abstract: The distribution of the sampling data influences the completeness of the rule base, so extrapolating missing rules is very difficult. Based on data mining, a self-learning method is developed for identifying the fuzzy model and extrapolating missing rules by means of a confidence measure and an improved gradient descent method. The proposed approach can not only identify the fuzzy model, update its parameters, and determine the optimal output fuzzy sets simultaneously, but also resolve the uncontrollable behavior caused by regions that the data do not cover. Simulation results on the classical truck backer-upper control problem verify the effectiveness and accuracy of the proposed approach.
Keywords: data mining; fuzzy system; gradient descent method; missing rule.