Journal Articles: 6 articles found
1. New conditions for zero-sum average stochastic games on Borel state spaces
Authors: 郭先平 (Xianping Guo), 廖景浩 (Jinghao Liao), 谭梓祺 (Ziqi Tan), 温馨 (Xin Wen). 《中国科学:数学》 (SCIENTIA SINICA Mathematica), CSCD, Peking University Core, 2024, Issue 12, pp. 1963-1978 (16 pages)
This paper studies discrete-time zero-sum Markov games with the average-payoff criterion on Borel state spaces. For the general case in which the reward function may be unbounded, the corresponding Shapley equation is replaced by an average optimality double inequality, and new conditions weaker than the existing geometric ergodicity conditions are proposed. Under these new conditions, the solvability of the average optimality double inequality is established, from which the existence of the value of the average game and of Nash equilibrium strategies is proved. Furthermore, under a stronger geometric ergodicity condition, the solvability of the Shapley equation is proved by means of the double inequality. Finally, examples from power systems and from finance and insurance are used to verify the conditions and illustrate the results. (An illustrative formulation sketch follows the keywords below.)
Keywords: zero-sum average stochastic games; optimality conditions; average optimality double inequality; Shapley equation; Nash equilibrium strategies
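For orientation only, below is a common textbook form of the average-payoff Shapley equation for a two-person zero-sum stochastic game, together with one typical way the equality is relaxed to a pair of inequalities. The symbols (state space X, one-stage reward r, transition kernel Q, game value g, bias functions h, h_1, h_2) and the directions of the inequalities are illustrative assumptions and do not reproduce the paper's precise conditions.

% Average-payoff Shapley equation; "val" denotes the value of the one-shot
% zero-sum game over the players' mixed actions (illustrative notation):
\[
  g + h(x) \;=\; \operatorname{val}\Big[\, r(x,a,b) + \int_X h(y)\, Q(\mathrm{d}y \mid x,a,b) \,\Big],
  \qquad x \in X .
\]
% One common relaxation to an "optimality double inequality" with two
% (possibly different) functions h_1, h_2; the paper's exact form may differ:
\[
  g + h_1(x) \;\le\; \operatorname{val}\Big[\, r(x,a,b) + \int_X h_1(y)\, Q(\mathrm{d}y \mid x,a,b) \,\Big],
  \qquad
  g + h_2(x) \;\ge\; \operatorname{val}\Big[\, r(x,a,b) + \int_X h_2(y)\, Q(\mathrm{d}y \mid x,a,b) \,\Big].
\]

Working with the two inequalities rather than the equation is what allows the geometric ergodicity requirements to be weakened, as described in the abstract.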
2. First passage Markov decision processes with constraints and varying discount factors (cited 2 times)
Authors: Xiao Wu, Xiaolong Zou, Xianping Guo. Frontiers of Mathematics in China, SCIE, CSCD, 2015, Issue 4, pp. 1005-1023 (19 pages)
This paper focuses on the constrained optimality problem (COP) for first passage discrete-time Markov decision processes (DTMDPs) with denumerable state and compact Borel action spaces, multiple constraints, state-dependent discount factors, and possibly unbounded costs. By means of the properties of a so-called occupation measure of a policy, we show that the constrained optimality problem is equivalent to an (infinite-dimensional) linear program over the set of occupation measures satisfying certain constraints, and thus prove the existence of an optimal policy under suitable conditions. Furthermore, using this equivalence between the constrained optimality problem and the linear program, we obtain an exact form of an optimal policy for the case of finite states and actions. Finally, as an example, a controlled queueing system is given to illustrate our results. (An illustrative occupation-measure linear program is sketched after the keywords below.)
Keywords: discrete-time Markov decision process (DTMDP); constrained optimality; varying discount factor; unbounded cost
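For orientation, the following is a generic occupation-measure linear program of the kind used for constrained MDPs. The symbols (initial distribution γ, transition kernel Q, state-dependent discount factor α, cost functions c_0,...,c_q, constraint bounds d_1,...,d_q) are illustrative assumptions; the paper's first-passage formulation has its own specific details.

% Generic occupation-measure LP for a constrained MDP (illustrative notation):
\[
\begin{aligned}
  \text{minimize}\quad & \int_{X\times A} c_0(x,a)\,\eta(\mathrm{d}x,\mathrm{d}a)\\
  \text{subject to}\quad & \int_{X\times A} c_k(x,a)\,\eta(\mathrm{d}x,\mathrm{d}a)\;\le\; d_k,
      \qquad k = 1,\dots,q,\\
  & \eta(B\times A) \;=\; \gamma(B) + \int_{X\times A} \alpha(x)\, Q(B \mid x,a)\,\eta(\mathrm{d}x,\mathrm{d}a)
      \qquad \text{for all Borel } B \subseteq X,\\
  & \eta \ \text{a finite nonnegative measure on } X\times A .
\end{aligned}
\]

An optimal policy is then typically recovered from an optimal measure η by disintegrating it into its marginal on X and a stochastic kernel on A, which is the usual route to existence results in this convex-analytic setting.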
3. Convergence of Markov decision processes with constraints and state-action dependent discount factors (cited 2 times)
Authors: Xiao Wu, Xianping Guo. Science China Mathematics, SCIE, CSCD, 2020, Issue 1, pp. 167-182 (16 pages)
This paper is concerned with the convergence of a sequence of discrete-time Markov decision processes (DTMDPs) with constraints, state-action dependent discount factors, and possibly unbounded costs. Using the convex analytic approach under mild conditions, we prove that the optimal values and optimal policies of the original DTMDPs converge to those of the "limit" one. Furthermore, we show that any countable-state DTMDP can be approximated by a sequence of finite-state DTMDPs constructed via the truncation technique. Finally, we illustrate the approximation by solving a controlled queueing system numerically and give the corresponding error bound of the approximation. (An illustrative truncation sketch in Python follows the keywords below.)
Keywords: discrete-time Markov decision processes; state-action dependent discount factors; unbounded costs; convergence
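As a minimal sketch of the truncation idea only (not the paper's construction, constrained setting, or error bound), the hypothetical Python snippet below truncates a simple countable-state controlled queue to states {0,...,N}, folds any transition mass that would leave the truncated set into the boundary state, and solves each finite model by discounted value iteration. The queue parameters, costs, and discount factor are made-up assumptions.

import numpy as np

def truncated_mdp(N, arrival=0.4, service=(0.3, 0.6), cost=lambda x, a: x + 2 * a):
    """Finite-state approximation of a simple controlled birth-death queue (illustrative)."""
    nA = len(service)
    P = np.zeros((N + 1, nA, N + 1))   # P[x, a, y] = transition probability
    C = np.zeros((N + 1, nA))          # C[x, a]    = one-step cost
    for x in range(N + 1):
        for a in range(nA):
            C[x, a] = cost(x, a)
            up = arrival
            down = service[a] if x > 0 else 0.0
            P[x, a, min(x + 1, N)] += up      # arrivals beyond N are folded into state N
            P[x, a, max(x - 1, 0)] += down    # service completion
            P[x, a, x] += 1.0 - up - down     # no change
    return P, C

def value_iteration(P, C, beta=0.9, tol=1e-8):
    """Standard discounted value iteration on the finite truncated model."""
    v = np.zeros(P.shape[0])
    while True:
        q = C + beta * (P @ v)          # q[x, a] = C[x, a] + beta * sum_y P[x, a, y] * v[y]
        v_new = q.min(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmin(axis=1)
        v = v_new

# As N grows, the optimal value of the truncated model stabilizes, which is the
# behaviour that the paper's approximation results make precise (with error bounds).
for N in (10, 20, 40, 80):
    P, C = truncated_mdp(N)
    v, policy = value_iteration(P, C)
    print(N, round(float(v[0]), 4))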
4. Optimal stopping time on discounted semi-Markov processes
Authors: Fang Chen, Xianping Guo, Zhong-Wei Liao. Frontiers of Mathematics in China, SCIE, CSCD, 2021, Issue 2, pp. 303-324 (22 pages)
This paper studies the optimal stopping time for semi-Markov processes (SMPs) under the discounted optimization criterion with unbounded cost rates. We introduce an explicit construction of equivalent semi-Markov decision processes (SMDPs). The equivalence is embodied in the expected discounted cost functions of the SMPs and SMDPs: every stopping time of the SMP induces a policy of the SMDP such that the value functions are equal, and vice versa. The existence of an optimal stopping time of the SMP is proved via this equivalence relation. Next, we give the optimality equation for the value function and develop an effective iterative algorithm for computing it. Moreover, we show that optimal and ε-optimal stopping times can be characterized as hitting times of special sets. Finally, to illustrate the validity of our results, an example of a maintenance system is presented. (An illustrative stopping-rule sketch in Python follows the keywords below.)
Keywords: optimal stopping time; semi-Markov processes (SMPs); value function; semi-Markov decision processes (SMDPs); optimal policy; iterative algorithm
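For intuition only, here is a hypothetical discrete-time, finite-state analogue of the structure described above (not the paper's semi-Markov construction or algorithm): iterate V(x) = min{ g(x), c(x) + β Σ_y P(x,y) V(y) }, then read off the optimal rule as the hitting time of the set where stopping attains the value. The chain, costs, and discount factor below are made-up assumptions.

import numpy as np

def stopping_value(P, c, g, beta=0.95, tol=1e-10):
    """Value iteration for a discounted optimal stopping problem on a finite chain."""
    V = g.copy()                              # stopping immediately is always feasible
    while True:
        V_new = np.minimum(g, c + beta * (P @ V))
        if np.max(np.abs(V_new - V)) < tol:
            stop_set = np.isclose(V_new, g)   # stop as soon as the chain hits this set
            return V_new, stop_set
        V = V_new

# Toy 4-state chain with illustrative costs (all numbers are made up).
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.4, 0.6]])
c = np.array([1.0, 1.0, 1.0, 1.0])    # running cost per step while continuing
g = np.array([8.0, 6.0, 3.0, 1.0])    # terminal cost paid when stopping
V, stop_set = stopping_value(P, c, g)
print(V, stop_set)

The stopping region printed at the end is the discrete analogue of the "special sets" whose hitting times characterize (ε-)optimal stopping in the paper.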
5. Total reward criteria for unconstrained/constrained continuous-time Markov decision processes
Authors: Xianping Guo, Lanlan Zhang. Journal of Systems Science & Complexity, SCIE, EI, CSCD, 2011, Issue 3, pp. 491-505 (15 pages)
This paper studies denumerable continuous-time Markov decision processes with expected total reward criteria. The authors first study the unconstrained model with possibly unbounded transition rates and give suitable conditions on the controlled system's primitive data under which they show the existence of a solution to the total reward optimality equation and the existence of an optimal stationary policy. Then the authors impose a constraint on an expected total cost and consider the associated constrained model. Based on the results for the unconstrained model and using the Lagrange multiplier approach, they prove the existence of constrained-optimal policies under some additional conditions. Finally, the results are applied to controlled queueing systems. (An illustrative form of the optimality equation follows the keywords below.)
Keywords: constrained-optimal policy; continuous-time Markov decision process; optimal policy; total reward criterion; unbounded reward/cost and transition rates
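For orientation, a form of the total-reward optimality equation that is common in the continuous-time MDP literature is sketched below, together with the Lagrangian-relaxed reward one would consider for a single constraint. The symbols (state space S, transition rates q(j|i,a), reward rate r, cost rate c, multiplier λ) are illustrative assumptions, and the paper's exact conditions may differ.

% Expected-total-reward optimality equation for a denumerable continuous-time
% MDP with transition rates q(j|i,a) and reward rate r(i,a):
\[
  \sup_{a \in A(i)} \Big[\, r(i,a) + \sum_{j \in S} q(j \mid i, a)\, u^{*}(j) \,\Big] \;=\; 0,
  \qquad i \in S .
\]
% With one constraint on an expected total cost c, the Lagrange-multiplier route
% replaces r by r - \lambda c, \lambda \ge 0, and studies the unconstrained equation
\[
  \sup_{a \in A(i)} \Big[\, r(i,a) - \lambda\, c(i,a) + \sum_{j \in S} q(j \mid i, a)\, u_{\lambda}(j) \,\Big] \;=\; 0 .
\]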
6. Correction to a proposition related to ergodicity of Markov chains
Authors: Ziqi Tan, Xianping Guo. Frontiers of Mathematics in China, SCIE, CSCD, 2021, Issue 3, pp. 801-813 (13 pages)
Proposition 5.5.6(ii) in the book Markov Chains and Stochastic Stability (2nd ed., Cambridge University Press, 2009) has been used in the proof of a theorem about the ergodicity of Markov chains. Unfortunately, an example in this paper shows that the proposition is not always true, and a correction of the proposition is therefore provided. (The standard minorization and splitting notions behind the keywords are recalled after them below.)
Keywords: Markov chain; split chain; petite set; minorization condition
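As background for the keywords only (the corrected statement itself is in the paper and is not reproduced here), the standard minorization condition and the resulting splitting construction can be written as follows; the notation (kernel P, set C, lag m, constant δ, measure ν) is the usual one from the Markov-chain literature.

% Minorization condition: a set C is small if there exist an integer m >= 1,
% a constant delta > 0, and a probability measure nu such that
\[
  P^{m}(x, B) \;\ge\; \delta\, \nu(B)
  \qquad \text{for all } x \in C \text{ and all measurable } B .
\]
% Petite sets are the analogous notion with P^m replaced by a sampled kernel
% \sum_{n \ge 0} a(n) P^{n} for a sampling distribution a on the nonnegative integers.
% Splitting: for x in C and delta < 1, the minorization allows the decomposition
\[
  P^{m}(x, \cdot) \;=\; \delta\, \nu(\cdot) + (1-\delta)\, R(x,\cdot),
  \qquad
  R(x,\cdot) \;=\; \frac{P^{m}(x,\cdot) - \delta\, \nu(\cdot)}{1-\delta},
\]
% which introduces regeneration times for the "split chain" used in ergodicity arguments.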