Abstract
To represent and store cost-to-go functions more compactly than lookup tables when scaling up average-reward Markov decision process (MDP) problems, a state-aggregation relative value iteration algorithm was used to approximate the value function, and the span semi-norm together with the contraction mapping principle was used to analyse the convergence of the algorithm. The Bellman optimality equation for the state-aggregated model was given. Under a span semi-norm contraction condition, the convergence of the algorithm was proved and an error bound was presented.
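As an illustration of the approach described in the abstract, the following is a minimal sketch of relative value iteration combined with state aggregation for a finite average-reward MDP. The arrays P and R, the state-to-cluster map, the cluster-averaging projection, the reference-state normalization and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def span(v):
    # Span semi-norm: sp(v) = max(v) - min(v)
    return float(v.max() - v.min())

def aggregated_relative_value_iteration(P, R, clusters, tol=1e-8, max_iter=10_000):
    """Relative value iteration on an aggregated (cluster-level) value function.

    P        : (A, S, S) array, P[a, s, t] = Pr(t | s, a)   (assumed input)
    R        : (S, A)    array of one-step rewards          (assumed input)
    clusters : (S,)      integer array mapping each state to its aggregate state
    Returns an estimate of the optimal average reward (gain) and the
    cluster-level relative value function.
    """
    A, S, _ = P.shape
    K = int(clusters.max()) + 1
    w = np.zeros(K)                              # aggregated relative values
    gain = 0.0
    for _ in range(max_iter):
        v = w[clusters]                          # lift cluster values back to states
        q = R + np.einsum('ast,t->sa', P, v)     # Q(s, a) Bellman backup
        t = q.max(axis=1)                        # one value-iteration step on states
        diff = t - v
        gain = 0.5 * (diff.max() + diff.min())   # average-reward (gain) estimate
        # Project the updated values back onto the clusters; a plain average over
        # each cluster is used here, which may differ from the paper's weighting.
        counts = np.bincount(clusters, minlength=K)
        w = np.bincount(clusters, weights=t, minlength=K) / counts
        w -= w[0]                                # keep values relative to a reference
        if span(diff) < tol:                     # span-based stopping rule
            break
    return gain, w
```

The span of successive differences is used both as the stopping rule and as the quantity whose contraction the paper analyses; the cluster-averaging step stands in for whatever aggregation operator the paper actually employs.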
Source
《北京理工大学学报》
Indexed in: EI, CAS, CSCD
2000, No. 3, pp. 304-308 (5 pages)
Transactions of Beijing Institute of Technology
Funding
Supported by the National Natural Science Foundation of China (69674005)
Keywords
dynamic programming
Markov decision processes
compact representation
state aggregation
average reward
stochastic control
value function approximation