
Collaborative Parameter Update Based on Average Variance Reduction of Historical Gradients

Cited by: 5
Abstract: The Stochastic Gradient Descent (SGD) algorithm estimates the gradient from a single randomly chosen sample, which introduces large variance, slows the convergence of machine learning models, and makes training unstable. This paper proposes a distributed variance-reduced SGD, named DisSAGD. The method updates the parameters of a machine learning model using average variance reduction over historical gradients, requiring neither full gradient computation nor additional storage, and shares parameters across nodes through an asynchronous communication protocol. To address the "update staleness" problem of global parameter distribution, a learning rate with an acceleration factor and an adaptive sampling strategy are adopted: on the one hand, when the parameters deviate from the optimum, the acceleration factor is increased to speed up convergence; on the other hand, when one worker node runs faster than the others, more samples are drawn for its next iteration, so that the worker nodes have more time to compute local gradients. Experiments show that DisSAGD significantly reduces the waiting time of loop iterations, accelerates convergence (converging faster than the baseline methods), and achieves near-linear speedup in distributed cluster environments.
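The variance-reduction principle described in the abstract can be illustrated with a SAGA-style update, in which each fresh stochastic gradient is corrected by the average of historical gradients, so the update's variance shrinks as training proceeds. The following is a minimal single-machine sketch on a least-squares problem; it is not the paper's algorithm. DisSAGD is distributed, shares parameters asynchronously, and (per the abstract) avoids the per-sample gradient table kept here, so the `stored` table, `g_avg`, and the step size below are illustrative assumptions only.

```python
import numpy as np

def variance_reduced_sgd(X, y, lr=0.01, epochs=200, seed=0):
    """SAGA-style variance-reduced SGD for least squares (illustrative sketch).

    Each step uses v = g_i(w) - stored[i] + g_avg, where stored[i] is the
    last gradient computed for sample i and g_avg is the running average of
    all stored historical gradients. E[v] equals the full gradient, while
    Var(v) shrinks as stored gradients approach their values at the optimum.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    stored = np.zeros((n, d))   # last gradient seen for each sample
    g_avg = np.zeros(d)         # average of the historical gradients
    for _ in range(epochs):
        for i in rng.permutation(n):
            g_i = (X[i] @ w - y[i]) * X[i]   # fresh gradient of sample i
            v = g_i - stored[i] + g_avg      # variance-reduced estimate
            g_avg += (g_i - stored[i]) / n   # keep the average current
            stored[i] = g_i
            w -= lr * v
    return w
```

On noiseless synthetic data this recovers the exact least-squares solution; plain SGD with the same fixed step size would keep oscillating around it because of gradient variance.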
Authors: XIE Tao; ZHANG Chunjiong; XU Yongjian (Wisdom Education Institute of College of Education, Southwest University, Chongqing 400715, China; College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China; College of Computers and Information Science, Southwest University, Chongqing 400715, China)
Source: Journal of Electronics & Information Technology (EI, CSCD, Peking University Core), 2021, No. 4, pp. 956-964 (9 pages)
Funding: National Natural Science Foundation of China (61807027).
Keywords: Gradient descent; Machine learning; Distributed cluster; Adaptive sampling; Variance reduction
