Abstract
Almost all sparse stochastic algorithms are derived from the online setting, so only the convergence rate of the average output can be obtained, and the optimal individual (instantaneous) convergence rate for strongly convex optimization problems cannot be attained. This paper avoids the online-to-batch conversion and studies stochastic optimization algorithms directly. First, an L2 regularizer is incorporated into the L1-regularized sparse optimization problem so that the objective becomes strongly convex. Then, the random step-size strategy from black-box optimization methods is introduced into the state-of-the-art composite objective mirror descent (COMID) framework, yielding a sparse stochastic optimization algorithm based on random step-size hybrid regularized mirror descent (RS-HMD). Finally, by analyzing how the soft-thresholding method solves the L1-regularized problem, the algorithm is proved to achieve the optimal individual convergence rate. Experimental results demonstrate that the sparsity of RS-HMD is better than that of COMID.
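The abstract refers to the soft-thresholding method for L1-regularized problems and to a hybrid L1+L2 regularizer. A minimal sketch of one COMID-style proximal step with such a hybrid regularizer is shown below; this is an illustration under standard assumptions (Euclidean distance, closed-form prox), not the paper's exact RS-HMD algorithm, and the names `eta`, `lam1`, `lam2` are hypothetical parameters:

```python
import numpy as np

def soft_threshold(v, tau):
    # Prox operator of tau * ||.||_1: shrink each coordinate toward zero,
    # setting small entries exactly to zero (this produces sparsity).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def hybrid_prox_step(w, grad, eta, lam1, lam2):
    # One COMID-style step for f(w) + lam1*||w||_1 + (lam2/2)*||w||_2^2:
    # take a gradient step on the loss f, then apply the closed-form prox
    # of the hybrid regularizer (soft threshold followed by L2 shrinkage).
    v = w - eta * grad
    return soft_threshold(v, eta * lam1) / (1.0 + eta * lam2)

# Example: entries with magnitude below the threshold become exactly zero.
w = hybrid_prox_step(np.array([3.0, -0.2, 1.0]),
                     np.array([0.0, 0.0, 0.0]),
                     eta=1.0, lam1=0.5, lam2=1.0)
```

The added L2 term (`lam2`) is what makes the problem strongly convex, while the soft-threshold step preserves the sparsity induced by the L1 term.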
Source
Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》), indexed in EI, CSCD, and the Peking University Core Journal list; 2015, No. 10, pp. 876-885 (10 pages).
Funding
Supported by the National Natural Science Foundation of China (No. 61273296) and the Anhui Provincial Natural Science Foundation Youth Project (No. 1508085QF114, 1308085QF121).
Keywords
Machine Learning, Stochastic Optimization, Optimal Individual Convergence Rate, Sparsity