Abstract
The state equations of stochastic control problems, which are controlled stochastic differential equations, are proposed to be discretized by the weak midpoint rule and by predictor-corrector methods for the Markov chain approximation approach. Local consistency of the methods is proved. Numerical tests on a simplified Merton's portfolio model show that these two methods reproduce feedback control rules more accurately than the weak Euler-Maruyama discretization used by Krawczyk. This suggests a new approach to improving the accuracy of approximating Markov chains for stochastic control problems.
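For orientation, the schemes named above can be sketched in their standard textbook form for a controlled SDE $dX_t = b(X_t,u_t)\,dt + \sigma(X_t,u_t)\,dW_t$ with step size $h$ and weak increment $\Delta W_n$ (e.g. $\pm\sqrt{h}$ with probability $1/2$ each). The symbols $b$, $\sigma$, $u$, and the particular weights below are illustrative assumptions, not necessarily the exact formulations analysed in the paper.

% illustrative, textbook-style discretizations (not the paper's exact schemes)
\begin{align}
  % weak Euler--Maruyama (the scheme used by Krawczyk)
  X_{n+1} &= X_n + b(X_n,u_n)\,h + \sigma(X_n,u_n)\,\Delta W_n,\\
  % midpoint rule: drift evaluated at the averaged state (implicit in X_{n+1})
  X_{n+1} &= X_n + b\!\Big(\tfrac{X_n+X_{n+1}}{2},\,u_n\Big)h + \sigma(X_n,u_n)\,\Delta W_n,\\
  % predictor--corrector: explicit Euler predictor, trapezoidal corrector for the drift
  \bar X_{n+1} &= X_n + b(X_n,u_n)\,h + \sigma(X_n,u_n)\,\Delta W_n,\\
  X_{n+1} &= X_n + \tfrac{h}{2}\big[\,b(X_n,u_n)+b(\bar X_{n+1},u_n)\,\big] + \sigma(X_n,u_n)\,\Delta W_n.
\end{align}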
Funding
Supported by the China Postdoctoral Science Foundation (No. 20080430402).