Abstract
Support Vector Regression (SVR) is a small-sample learning method with strong robustness. It can effectively avoid the "curse of dimensionality" and has been introduced into global optimization. However, existing SVR-based global optimization algorithms suffer from shortcomings such as a large number of function evaluations and an inability to cope with high-dimensional optimization problems. We propose DISVR, a new improved global optimization algorithm based on an incremental SVR model: an incremental SVR method improves the efficiency of reconstructing the response surface during the optimization process; a new incremental Latin hypercube design (LHD) sampling method ensures a uniform distribution of the sample set; and the DIRECT search algorithm enhances the stability and efficiency of the global search. Finally, results on several test functions show that the proposed algorithm both lowers the time complexity and effectively reduces the number of evaluations of the source model.
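The abstract does not spell out the paper's incremental LHD construction, but the plain Latin hypercube design it builds on is easy to illustrate: each dimension is split into as many equal strata as there are samples, and exactly one point lands in each stratum per dimension. The sketch below is a minimal NumPy illustration of standard (non-incremental) LHD; the `latin_hypercube` helper is hypothetical, not from the paper.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube design in [0, 1]^n_dims: each dimension is
    divided into n_samples equal strata, and every stratum in every
    dimension receives exactly one point."""
    rng = np.random.default_rng(rng)
    # one uniform random offset inside each stratum
    u = rng.random((n_samples, n_dims))
    # independently permute the strata along every dimension
    perms = np.stack([rng.permutation(n_samples) for _ in range(n_dims)],
                     axis=1)
    return (perms + u) / n_samples

X = latin_hypercube(8, 2, rng=0)
# each column of floor(8 * X) is a permutation of 0..7,
# i.e. one sample per stratum in each dimension
strata = np.floor(8 * X).astype(int)
print(sorted(strata[:, 0]), sorted(strata[:, 1]))
```

An incremental variant, as the abstract suggests, would refine this design by adding new points to the existing sample set while restoring the one-point-per-stratum property, so the SVR response surface can be rebuilt without discarding earlier evaluations.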
Source
Computer Science (《计算机科学》)
CSCD-indexed; Peking University Core Journal
2012, Issue 4, pp. 185-188 (4 pages)
Funding
Supported by the National Natural Science Foundation of China (50775084, 50975107)
Keywords
Global optimization
Response surface
Support vector regression
Incremental method