Abstract
A network optimization strategy for support vector regression (SVR) is presented. The learning procedure consists of two stages. First, an SVR is trained to obtain the initial structure and parameters, from which a support vector regression network (SVRN) without a bias term is constructed. Then the weights of the SVRN are optimized by a recursive least squares algorithm with a forgetting factor, improving the function approximation accuracy. Compared with standard SVR, this strategy yields optimal weights and bias. Simulation results show that the resulting network performs well and has potential for online modeling.
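The two-stage scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn's `SVR` with an RBF kernel as stage one, and the names `lam` (forgetting factor) and `P` (inverse correlation matrix) are our own notation.

```python
import numpy as np
from sklearn.svm import SVR

# Stage 1: train an SVR; its support vectors give the network structure
# and its dual coefficients give the initial weights (bias term dropped,
# matching the "network without bias" construction in the abstract).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(200)

svr = SVR(kernel="rbf", gamma=1.0, C=10.0, epsilon=0.01).fit(X, y)
SV = svr.support_vectors_             # kernel centers of the network
w = svr.dual_coef_.ravel().copy()     # initial weights

def phi(x):
    """RBF activations of one input w.r.t. the support vectors (gamma=1)."""
    return np.exp(-np.sum((SV - x) ** 2, axis=1))

# Stage 2: refine the weights with recursive least squares plus a
# forgetting factor lam (standard RLS recursion).
lam = 0.98
P = 1e3 * np.eye(len(w))              # inverse correlation matrix
for xk, yk in zip(X, y):
    h = phi(xk)
    g = P @ h / (lam + h @ P @ h)     # gain vector
    e = yk - h @ w                    # a priori prediction error
    w = w + g * e
    P = (P - np.outer(g, h @ P)) / lam

pred = np.array([phi(x) @ w for x in X])
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print("RMSE after RLS refinement:", rmse)
```

The forgetting factor discounts old samples exponentially, which is what makes the second stage usable online: each new observation updates `w` in O(n²) without refitting the SVR.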
Source
《控制与决策》 (Control and Decision), 2006, Issue 7, pp. 837-840 (4 pages)
Indexed in EI, CSCD, and the Peking University Core Journals list (北大核心)
Funding
National 863 Program project (2002AA412010)
Key Technologies R&D project of the Ministry of Science and Technology (2003EG113016)
Key discipline co-construction project of the Beijing Municipal Education Commission
Keywords
support vector regression
recursive least squares
network optimization