Abstract
The stochastic gradient descent (SGD) algorithm has become a popular method for solving large-scale finite-sum optimization problems. However, the variance introduced in its iterations causes oscillation. The stochastically controlled stochastic gradient (SCSG) algorithm reduces this variance, but it imposes strong restrictions on the stepsize. To widen the range of admissible stepsizes for SCSG, we combine the 1/t-band stepsize with the Polyak stepsize to propose the 1/t-Polyak stepsize, and incorporate it into SCSG to obtain the SCSGP algorithm. We establish the linear convergence of SCSGP under strong convexity, and numerical experiments show that SCSGP has a clear advantage over other stochastic gradient algorithms.
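A minimal sketch of one plausible way to combine a 1/t-type band with the classical Polyak stepsize, illustrated on deterministic gradient descent for a toy quadratic. The function name, the band constants `c_low`/`c_high`, and the clipping rule are illustrative assumptions, not the paper's exact definition of the 1/t-Polyak stepsize:

```python
def one_over_t_polyak_stepsize(f_x, f_star, grad_norm_sq, t, c_low, c_high):
    """Classical Polyak stepsize (f(x) - f*) / ||grad||^2, clipped into a
    band [c_low / t, c_high / t] that shrinks like 1/t (hypothetical rule)."""
    eta = (f_x - f_star) / grad_norm_sq
    return min(max(eta, c_low / t), c_high / t)

# Toy run: gradient descent on f(x) = 0.5 * x**2, so f* = 0 and grad = x.
x = 4.0
for t in range(1, 101):
    grad = x
    f_x = 0.5 * x * x
    eta = one_over_t_polyak_stepsize(f_x, 0.0, grad * grad, t,
                                     c_low=0.1, c_high=2.0)
    x -= eta * grad
print(abs(x))  # distance to the minimizer x* = 0
```

Early on the band is wide and the pure Polyak stepsize (here constant at 0.5) is used; once `c_high / t` drops below it, the 1/t upper band takes over, so the stepsize is bounded without requiring knowledge of problem constants.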
Source
Advances in Applied Mathematics (《应用数学进展》)
2024, No. 3, pp. 1008-1017 (10 pages)