[1] Boutilier C, Dean T, Hanks S. Decision-theoretic planning: structural assumptions and computational leverage[J]. Journal of Artificial Intelligence Research, 1999, 11: 1-94.
[2] Åström K J. Optimal control of Markov processes with incomplete state information[J]. Journal of Mathematical Analysis and Applications, 1965, 10: 174-205.
[3] Eagle J. The optimal search for a moving target when the search path is constrained[J]. Operations Research, 1984, 32: 1107-1115.
[4] Sondik E J. The optimal control of partially observable Markov processes over the infinite horizon: discounted costs[J]. Operations Research, 1978, 26(2): 282-304.
[5] Cassandra A R. A survey of POMDP applications[C]//Proceedings of the AAAI Fall Symposium on Planning with Partially Observable Markov Decision Processes, 1998: 17-24.
[6] White C C, Scherer W T. Solution procedures for partially observed Markov decision processes[J]. Operations Research, 1989, 37(5): 791-797.
[7] Smallwood R D, Sondik E J. The optimal control of partially observable Markov processes over a finite horizon[J]. Operations Research, 1973, 21(5): 1071-1088.
[8] Sondik E J. The optimal control of partially observable Markov processes[D]. Stanford, CA: Department of Electrical Engineering, Stanford University, 1971.
[9] Monahan G E. A survey of partially observable Markov decision processes: theory, models, and algorithms[J]. Management Science, 1982, 28(1): 1-16.
[10] Cheng H T. Algorithms for partially observable Markov decision processes[D]. Faculty of Commerce and Business Administration, University of British Columbia, 1988.