Identification of Optimal Policies in Markov Decision Processes

2010, Kybernetika

Abstract

In this note we focus attention on identifying optimal policies, and on eliminating suboptimal policies, for minimizing optimality criteria in discrete-time Markov decision processes with finite state space and compact action set. We present a unified approach to value iteration algorithms that makes it possible to generate lower and upper bounds on the optimal values, as well as on the values of the current policy. Using these modified value iterations it is possible to eliminate suboptimal actions and to identify an optimal policy, or nearly optimal policies, in a finite number of steps without knowing the precise values of the performance function.
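
The bound-and-eliminate scheme the abstract describes can be illustrated in the simplest setting. Below is a minimal Python/NumPy sketch, assuming a finite action set and the discounted cost-minimization criterion: each value iteration step yields MacQueen-type lower and upper bounds L <= v* <= U (cf. references 7 and 8 below), and an action whose lower-bounded Q-value already exceeds the upper bound on the optimal value can never be optimal and is discarded. The function name, the toy instance, and the tolerance are illustrative assumptions, not taken from the paper, which also treats compact action sets.

```python
import numpy as np

def value_iteration_with_elimination(c, P, beta, tol=1e-6, max_iter=10_000):
    """Value iteration with MacQueen-type bounds and elimination of
    suboptimal actions, for a discounted cost-minimizing MDP with
    finite state and action sets (illustrative sketch).

    c[i, a]    : one-stage cost in state i under action a
    P[a, i, j] : probability of moving from state i to j under action a
    beta       : discount factor, 0 < beta < 1
    """
    n_states, n_actions = c.shape
    v = np.zeros(n_states)
    active = np.ones((n_states, n_actions), dtype=bool)  # surviving actions

    for _ in range(max_iter):
        # Bellman update restricted to the actions not yet eliminated
        q = c + beta * np.einsum('aij,j->ia', P, v)
        v_new = np.where(active, q, np.inf).min(axis=1)

        # MacQueen-type bounds: L <= v* <= U componentwise
        diff = v_new - v
        L = v_new + beta / (1.0 - beta) * diff.min()
        U = v_new + beta / (1.0 - beta) * diff.max()

        # Action a is provably suboptimal in state i if even its
        # optimistic cost, c(i,a) + beta * E[L], exceeds U(i) >= v*(i)
        q_lower = c + beta * np.einsum('aij,j->ia', P, L)
        active &= q_lower <= U[:, None]

        v = v_new
        if (U - L).max() < tol:
            break

    # Greedy (cost-minimizing) policy among the surviving actions
    q = c + beta * np.einsum('aij,j->ia', P, v)
    policy = np.where(active, q, np.inf).argmin(axis=1)
    return v, policy, active


if __name__ == "__main__":
    # Tiny 2-state, 2-action instance with made-up numbers
    c = np.array([[1.0, 0.5],
                  [2.0, 0.0]])
    P = np.array([[[0.9, 0.1],    # transitions under action 0
                   [0.2, 0.8]],
                  [[0.5, 0.5],    # transitions under action 1
                   [0.1, 0.9]]])
    v, policy, active = value_iteration_with_elimination(c, P, beta=0.9)
    print("approximate optimal values:", v)
    print("identified policy:", policy)
    print("actions still active:", active)
```

Because the elimination test compares a lower bound on an action's Q-value against an upper bound on the optimal value, it can only discard actions that are certainly suboptimal; this is what allows an optimal policy to be identified in finitely many steps even though the exact optimal values are never computed.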

References (14)

  1. D. Cruz-Suárez and R. Montes-de-Oca: Uniform convergence of the value iteration policies for discounted Markov decision processes. Bol. de la Soc. Mat. Mexicana 12 (2006), 133-148.
  2. D. Cruz-Suárez, R. Montes-de-Oca, and F. Salem-Silva: Uniform approximations of discounted Markov decision processes to optimal policies. Proceedings of Prague Stochastics 2006 (M. Hušková and M. Janžura, eds.), Matfyzpress, Prague 2006, pp. 278-287.
  3. R. Grinold: Elimination of suboptimal actions in Markov decision problems. Oper. Res. 21 (1973), 848-851.
  4. N. A. J. Hastings: Bounds on the gain of a Markov decision process. Oper. Res. 19 (1971), 240-243.
  5. N. A. J. Hastings and J. Mello: Tests for suboptimal actions in discounted Markov programming. Manag. Sci. 19 (1971), 1019-1022.
  6. N. A. J. Hastings and J. Mello: Tests for suboptimal actions in undiscounted Markov decision chains. Manag. Sci. 23 (1976), 87-91.
  7. J. MacQueen: A modified dynamic programming method for Markov decision problems. J. Math. Anal. Appl. 14 (1966), 38-43.
  8. J. MacQueen: A test of suboptimal actions in Markovian decision problems. Oper. Res. 15 (1967), 559-561.
  9. A. R. Odoni: On finding the maximal gain for Markov decision processes. Oper. Res. 17 (1969), 857-860.
  10. M. L. Puterman and M. C. Shin: Modified policy iteration algorithms for discounted Markov decision problems. Manag. Sci. 24 (1978), 1127-1137.
  11. M. L. Puterman and M. C. Shin: Action elimination procedures for modified policy iteration algorithm. Oper. Res. 30 (1982), 301-318.
  12. M. L. Puterman: Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, New York 1994.
  13. K. Sladký: On the successive approximation method for finding optimal control of a Markov chain (in Czech: O metodě postupných aproximací pro nalezení optimálního řízení markovského řetězce). Kybernetika 4 (1969), 2, 167-176.
  14. D. J. White: Dynamic programming, Markov chains and the method of successive approximation. J. Math. Anal. Appl. 6 (1963), 296-306.

Karel Sladký, Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 4, 182 08 Praha 8, Czech Republic. E-mail: sladky@utia.cas.cz