Abstract: The continuous-time Markov decision programming (CTMDP) model with the discounted return criterion investigated in this note is {S, [(A(i), 𝒜(i)), i∈S], q, r, α}. In this model the state set S is countable; the action set A(i) is non-empty, and 𝒜(i) is a σ-algebra on A(i) containing all singleton subsets of A(i); the family of transition rates q(j|i, a) …
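As a point of reference for the discounted return criterion named above, the α-discounted return that such a model optimizes is typically written as follows; this is a standard formulation assumed for illustration, not quoted from the note:

% Standard α-discounted expected return for a CTMDP under a policy π starting
% from state i (assumed standard formulation, not taken from the note):
\[
  V_\pi(i) = \mathbb{E}_\pi^{\,i}\!\left[ \int_0^{\infty} e^{-\alpha t}\, r(x_t, a_t)\, dt \right],
  \qquad
  V^*(i) = \sup_{\pi} V_\pi(i),
\]
% where x_t is the state process governed by the transition rates q(j | i, a),
% a_t is the action in force at time t, r is the reward rate, and α > 0 is the
% discount rate.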
Funding: This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61374067, 41271076).
Abstract: This paper focuses on the constrained optimality problem (COP) of first passage discrete-time Markov decision processes (DTMDPs) with denumerable state spaces, compact Borel action spaces, multiple constraints, state-dependent discount factors, and possibly unbounded costs. By means of the properties of a so-called occupation measure of a policy, we show that the constrained optimality problem is equivalent to an (infinite-dimensional) linear program over the set of occupation measures subject to the corresponding constraints, and thus prove the existence of an optimal policy under suitable conditions. Furthermore, using the equivalence between the constrained optimality problem and the linear program, we obtain an exact form of an optimal policy for the case of finite states and actions. Finally, a controlled queueing system is given as an example to illustrate our results.
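For the finite-state, finite-action case mentioned at the end of this abstract, the occupation-measure reduction can be made concrete. The sketch below is a minimal illustration under simplified assumptions: a classical constant-discount constrained MDP with a single cost constraint and hypothetical random data, not the paper's exact first-passage, state-dependent-discount formulation. It poses the problem as a linear program over occupation measures with scipy.optimize.linprog and then recovers a stationary randomized policy by normalizing the optimal measure state by state, mirroring the equivalence described above.

# Minimal sketch: the occupation-measure linear program for a *finite*
# constrained MDP, and a stationary policy read off the optimal measure.
# This uses the classical constant-discount formulation for illustration
# only -- the paper's setting (first passage criterion, state-dependent
# discount factors, Borel action spaces) is more general. All data below
# (P, c0, c1, d1, mu, beta) are hypothetical toy inputs.
import numpy as np
from scipy.optimize import linprog

S, A = 3, 2                          # toy numbers of states and actions
rng = np.random.default_rng(0)

P = rng.random((S, A, S))            # transition kernel P[s, a, s']
P /= P.sum(axis=2, keepdims=True)
c0 = rng.random((S, A))              # cost to be minimized
c1 = rng.random((S, A))              # constrained cost: require E[c1] <= d1
beta = 0.9                           # constant discount factor (illustrative)
d1 = 0.6 / (1.0 - beta)              # budget ~ average per-step cost of 0.6
mu = np.full(S, 1.0 / S)             # initial state distribution

# Variables: occupation measure x(s, a) >= 0, flattened to length S*A.
# Balance constraints:
#   sum_a x(s', a) - beta * sum_{s, a} P(s'|s, a) x(s, a) = mu(s')  for all s'.
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (1.0 if s == sp else 0.0) - beta * P[s, a, sp]
b_eq = mu

# Linear constraint on the secondary cost: sum_{s,a} c1(s,a) x(s,a) <= d1.
res = linprog(c=c0.ravel(),
              A_ub=c1.reshape(1, -1), b_ub=[d1],
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

if res.success:
    x = res.x.reshape(S, A)
    # Recover a stationary randomized policy: pi(a|s) = x(s,a) / sum_a x(s,a)
    # on states with positive mass (arbitrary, e.g. uniform, elsewhere).
    mass = x.sum(axis=1, keepdims=True)
    pi = np.divide(x, mass, out=np.full_like(x, 1.0 / A), where=mass > 1e-12)
    print("optimal constrained discounted cost:", res.fun)
    print("stationary policy (rows = states):\n", pi)
else:
    print("LP did not solve:", res.message)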