The Complexity of Markov Decision Processes
Top 10% of 1987 papers by citation count.
Abstract
We investigate the complexity of the classical problem of optimal policy computation in Markov decision processes. All three variants of the problem (finite horizon, infinite horizon discounted, and infinite horizon average cost) were known to be solvable in polynomial time by dynamic programming (finite horizon problems), linear programming, or successive approximation techniques (infinite horizon). We show that they are complete for P, and therefore most likely cannot be solved by highly parallel algorithms. We also show that, in contrast, the deterministic cases of all three problems can be solved very fast in parallel. The version with partially observed states is shown to be PSPACE-complete, and thus even less likely to be solved in polynomial time than the NP-complete problems; in fact, we show that, most likely, it is not possible to have an efficient on-line implementation (involving polynomial time on-line computations and memory) of an optimal policy, even if an arbitrary amount of precomputation is allowed. Finally, the variant of the problem in which there are no observations is shown to be NP-complete.
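The abstract notes that the finite-horizon variant is solvable in polynomial time by dynamic programming. A minimal sketch of that method (backward induction over the horizon) is below; the two-state, two-action MDP instance and all function names are illustrative assumptions, not taken from the paper.

```python
# Sketch: optimal finite-horizon policy via backward induction (dynamic
# programming), the polynomial-time method the abstract mentions for the
# finite-horizon case. The toy MDP below is a hypothetical example.

def backward_induction(n_states, n_actions, P, R, horizon):
    """P[a][s][t] = probability of moving s -> t under action a,
    R[a][s] = immediate reward for taking a in s.
    Returns the optimal values at time 0 and one decision rule per stage."""
    V = [0.0] * n_states                 # terminal values (all zero)
    policy = []
    for _ in range(horizon):             # work backwards from the horizon
        new_V = [0.0] * n_states
        stage_rule = [0] * n_states
        for s in range(n_states):
            best_q, best_a = float("-inf"), 0
            for a in range(n_actions):
                q = R[a][s] + sum(P[a][s][t] * V[t] for t in range(n_states))
                if q > best_q:
                    best_q, best_a = q, a
            new_V[s], stage_rule[s] = best_q, best_a
        V = new_V
        policy.append(stage_rule)
    policy.reverse()                     # policy[k] = decision rule at stage k
    return V, policy

# Hypothetical two-state, two-action instance.
P = [[[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
     [[0.5, 0.5], [0.6, 0.4]]]   # transitions under action 1
R = [[1.0, 0.0],                 # rewards for action 0 in states 0, 1
     [0.0, 2.0]]                 # rewards for action 1 in states 0, 1
V, pi = backward_induction(2, 2, P, R, horizon=3)
```

The work per stage is polynomial in the number of states and actions, which is why the whole computation is in P; the paper's point is that, being P-complete, it is most likely not parallelizable to polylogarithmic time.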
Related Papers
- → Hex ist PSPACE-vollständig [Hex is PSPACE-complete] (1981), 106 citations
- → Gobang ist PSPACE-vollständig [Gobang is PSPACE-complete] (1980), 54 citations
- → A Sample-Efficient Algorithm for Episodic Finite-Horizon MDP with Constraints (2021), 14 citations
- → Leader-Follower semi-Markov Decision Problems: Theoretical Framework and Approximate Solution (2007), 6 citations
- → Allocating Services to Applications using Markov Decision Processes (2007), 8 citations