Abstract
Using a variance reduction strategy in stochastic methods for smooth problems can effectively improve the convergence of the algorithm. By combining the ideas of weighted averaging and variance reduction, an algorithm, hybrid regularized mirror descent with reduced variance and weighted average (α-HRMDVR-W), is obtained for solving the "L1 + L2 + Hinge" non-smooth strongly convex optimization problem. A variance reduction strategy is applied at each step of the iterative process, and the output is produced by weighted averaging. The algorithm is proved to achieve the optimal convergence rate, and this rate does not depend on the number of samples. Unlike existing variance reduction methods, α-HRMDVR-W uses only a small portion of the samples, instead of the full sample set, to correct the gradient at each iteration. Experimental results show that α-HRMDVR-W reduces the variance while also saving CPU time.
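For concreteness, the "L1 + L2 + Hinge" problem named above can be written as the regularized hinge-loss minimization below. This formulation, the mini-batch variance-reduced subgradient, and the weighted-average output are an illustrative reconstruction from the abstract, not the paper's own notation: the symbols λ1, λ2 (regularization weights), w̃ (anchor point), B_t (mini-batch), and α_t (averaging weights) are assumptions.

$$
\min_{w \in \mathbb{R}^d} F(w) \;=\; \frac{1}{n}\sum_{i=1}^{n}\max\{0,\, 1 - y_i \langle w, x_i \rangle\} \;+\; \lambda_1 \lVert w \rVert_1 \;+\; \frac{\lambda_2}{2} \lVert w \rVert_2^2
$$

The L2 term makes F strongly convex, while the hinge loss and the L1 term keep it non-smooth (in hybrid regularized mirror descent methods, the regularizers are typically handled by the mirror/proximal step rather than by stochastic subgradients). An SVRG-style variance-reduced subgradient, with the full-sample anchor average replaced by a small mini-batch B_t as the abstract describes, could take the form

$$
\tilde g_t \;=\; g_{i_t}(w_t) - g_{i_t}(\tilde w) + \frac{1}{|B_t|}\sum_{j \in B_t} g_j(\tilde w),
\qquad
\bar w_T \;=\; \frac{\sum_{t=1}^{T} \alpha_t\, w_t}{\sum_{t=1}^{T} \alpha_t},
$$

where g_i(·) is a subgradient of the hinge loss on sample i. Taking expectations over the independent draws of i_t and B_t shows that g̃_t is an unbiased estimate of the average hinge subgradient at w_t, and its variance shrinks as w_t and w̃ approach the optimum, which is the mechanism behind the variance reduction claimed above; w̄_T is the weighted-average output, and the α in α-HRMDVR-W presumably refers to these averaging weights.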
Source
Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》)
Indexed in: EI, CSCD, PKU Core Journals (北大核心)
2016, No. 7, pp. 577-589 (13 pages)
Funding
Supported by the National Natural Science Foundation of China (No. 61273296)
Keywords
Machine Learning, Stochastic Optimization, Reduced Variance