
Sarsa(λ) Learning Algorithm Based on a Stacked Neural Network

Abstract: The standard Sarsa(λ) algorithm requires a state space that is discrete and small, whereas in practical problems the state space is often continuous, or discrete but large, so storing every state-action pair demands a great deal of memory. To address this, a stacked neural network is proposed: a self-organizing map (SOM) network first quantizes the state space adaptively, and a BP network then fits the Q-function on top of this quantization. The method enables the Sarsa(λ) algorithm to generalize over continuous and large-scale state spaces. Experimental results demonstrate the effectiveness of the proposed algorithm.
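The record carries no pseudocode, but the two-stage idea in the abstract is concrete enough to sketch. Below is a minimal, hypothetical Python/NumPy illustration, not the authors' implementation: a one-dimensional SOM adaptively quantizes a continuous state into a one-hot winner code; a small backprop-trained network (standing in for the BP network) maps that code to Q-values; and Sarsa(λ) learns through the usual TD error δ = r + γQ(s′, a′) − Q(s, a) with gradient eligibility traces e ← γλe + ∇θQ(s, a) and update θ ← θ + αδe. The class names, the `env` interface, and all hyperparameters are assumptions made for illustration.

```python
import numpy as np

class SOM:
    """Minimal 1-D self-organizing map: each unit's weight vector is a
    prototype state, so the winner index is an adaptive quantization."""
    def __init__(self, n_units, state_dim, lr=0.1, sigma=2.0, seed=0):
        self.w = np.random.default_rng(seed).uniform(-1.0, 1.0, (n_units, state_dim))
        self.lr, self.sigma = lr, sigma

    def update(self, s):
        j = int(np.argmin(np.linalg.norm(self.w - s, axis=1)))  # best-matching unit
        d = np.abs(np.arange(len(self.w)) - j)                  # grid distance to winner
        h = np.exp(-d**2 / (2.0 * self.sigma**2))               # neighborhood function
        self.w += self.lr * h[:, None] * (s - self.w)           # pull prototypes toward s
        return j

class QNet:
    """One-hidden-layer MLP (the 'BP network') from the SOM code to Q-values."""
    def __init__(self, n_in, n_hidden, n_actions, seed=1):
        rng = np.random.default_rng(seed)
        self.params = [rng.normal(0, 0.1, (n_hidden, n_in)), np.zeros(n_hidden),
                       rng.normal(0, 0.1, (n_actions, n_hidden)), np.zeros(n_actions)]

    def q(self, x):
        W1, b1, W2, b2 = self.params
        h = np.tanh(W1 @ x + b1)
        return W2 @ h + b2, h

    def grads(self, x, a):
        """Gradient of Q(x, a) w.r.t. every parameter (one backprop pass)."""
        W1, b1, W2, b2 = self.params
        _, h = self.q(x)
        db2 = np.zeros_like(b2); db2[a] = 1.0
        dW2 = np.zeros_like(W2); dW2[a] = h
        dh = W2[a] * (1.0 - h**2)                               # backprop through tanh
        return [np.outer(dh, x), dh, dW2, db2]

def sarsa_lambda_episode(env, som, qnet, traces, alpha=0.05, gamma=0.99,
                         lam=0.9, eps=0.1, rng=np.random.default_rng(2)):
    """One Sarsa(lambda) episode over SOM-quantized states. `env` is assumed
    to offer reset() -> state and step(a) -> (state, reward, done)."""
    def code(s):                                 # one-hot code of the SOM winner
        x = np.zeros(len(som.w)); x[som.update(s)] = 1.0
        return x

    def policy(x):                               # epsilon-greedy on Q(x, .)
        qvals, _ = qnet.q(x)
        return int(rng.integers(len(qvals))) if rng.random() < eps else int(np.argmax(qvals))

    for e in traces:
        e.fill(0.0)                              # reset eligibility traces
    x = code(env.reset()); a = policy(x); done = False
    while not done:
        s2, r, done = env.step(a)
        x2 = code(s2); a2 = policy(x2)
        target = r if done else r + gamma * qnet.q(x2)[0][a2]
        delta = target - qnet.q(x)[0][a]                        # TD error
        for e, g in zip(traces, qnet.grads(x, a)):
            e *= gamma * lam; e += g                            # decay + accumulate
        for p, e in zip(qnet.params, traces):
            p += alpha * delta * e                              # Sarsa(lambda) step
        x, a = x2, a2
```

A caller would allocate the trace buffers once, e.g. traces = [np.zeros_like(p) for p in qnet.params], and invoke sarsa_lambda_episode repeatedly; the SOM keeps adapting online while the Q-network trains. The paper's actual network sizes, SOM topology, and training schedule are not stated in this record, so the sketch only fixes the ideas the abstract names.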
Source: Computer Engineering and Design (《计算机工程与设计》), CSCD, Peking University core journal, 2008, No. 22, pp. 5817-5819, 5823 (4 pages).
Keywords: stacked neural network; reinforcement learning; self-organizing map (SOM); back-propagation (BP) network; Sarsa algorithm

References (12)

  • 1. Sutton R S, Barto A G. Reinforcement Learning: An Introduction[M]. Cambridge, MA: The MIT Press, 1998.
  • 2. Kaelbling L P, Littman M L, Moore A W. Reinforcement learning: A survey[J]. Journal of Artificial Intelligence Research, 1996, 4: 237-285.
  • 3. Sutton R S. Learning to predict by the methods of temporal differences[J]. Machine Learning, 1988, 3: 9-44.
  • 4. Watkins C J C H, Dayan P. Q-learning[J]. Machine Learning, 1992, 8: 279-292.
  • 5. Rummery G A, Niranjan M. On-line Q-learning using connectionist systems[R]. Cambridge: Cambridge University Engineering Department, 1994.
  • 6. Singh S P, Sutton R S. Reinforcement learning with replacing eligibility traces[J]. Machine Learning, 1996, 22: 123-158.
  • 7. Peng J, Williams R J. Incremental multi-step Q-learning[J]. Machine Learning, 1996, 22: 283-290.
  • 8. Smith A J. Applications of the self-organising map to reinforcement learning[J]. Neural Networks, 2002, 15(8-9): 1107-1124.
  • 9. Lin Lianming, Wang Hao, Wang Yixiong. Sarsa reinforcement learning algorithm based on neural networks[J]. Computer Technology and Development, 2006, 16(1): 30-32.
  • 10. Kohonen T. Self-Organizing Maps[M]. Springer Series in Information Sciences. New York: Springer, 2001.

