Key research themes
1. What are the computational complexity challenges and algorithmic solutions for solving Markov Decision Processes (MDPs)?
This research focuses on the computational hardness of MDPs, the classes of algorithms designed for exact and approximate solution, and the exploitation of problem-specific structure to improve efficiency. It matters because MDPs underpin applications from AI planning to operations research, yet the theoretical polynomial-time solvability of finite MDPs (e.g., via linear programming) contrasts with the practical inefficiency of exact methods on large-scale problems.
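As a concrete reference point for the algorithmic side of this theme, here is a minimal sketch of value iteration, the canonical dynamic-programming method for solving MDPs. The transition matrices, rewards, and discount factor are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Solve a small finite MDP by value iteration.

    P[a] is the |S| x |S| transition matrix under action a;
    R[s, a] is the immediate reward for taking a in state s.
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a][s, s'] * V[s']
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values, greedy policy
        V = V_new

# A two-state, two-action toy problem (all numbers are illustrative).
P = [np.array([[0.8, 0.2], [0.1, 0.9]]),   # transitions under action 0
     np.array([[0.5, 0.5], [0.6, 0.4]])]   # transitions under action 1
R = np.array([[1.0, 0.0], [0.0, 2.0]])     # R[s, a]
V, policy = value_iteration(P, R)
print(V, policy)
```

Each sweep costs O(|S|² |A|) for dense transitions, which is exactly where exploiting sparsity or other problem-specific structure pays off at scale.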
2. How can uncertainty, ambiguity, and incomplete or inaccurate information be represented and incorporated into Markov Decision Processes?
This research area investigates extensions of classical MDPs to settings where rewards, state observability, or model parameters are uncertain or imprecise. Capturing such uncertainty more faithfully leads to richer models such as fuzzy-reward MDPs, partially observable MDPs (POMDPs), and robust formulations that plan against adversarial or misspecified transitions. Accounting for this uncertainty is essential for realistic decision-making and for policies that remain reliable under model error.
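To make the robustness idea concrete, the sketch below performs a pessimistic Bellman backup over a finite set of candidate transition models, one simple form of ambiguity set. The candidate models, rewards, and the set itself are assumptions chosen for illustration; real robust-MDP formulations often use richer uncertainty sets (e.g., interval or divergence-based).

```python
import numpy as np

def robust_value_iteration(models, R, gamma=0.9, tol=1e-8):
    """Robust value iteration against a finite set of transition models.

    Each model in `models` is a list of |S| x |S| matrices, one per
    action; the backup keeps the worst expected next-state value
    across models (a pessimistic, max-min criterion).
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # worst[s, a]: minimum over candidate models of E[V(s') | s, a]
        worst = np.stack(
            [np.min([P[a] @ V for P in models], axis=0)
             for a in range(n_actions)],
            axis=1,
        )
        V_new = (R + gamma * worst).max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, (R + gamma * worst).argmax(axis=1)
        V = V_new

# Two candidate models: a nominal one and a perturbed (misspecified) one.
nominal = [np.array([[0.8, 0.2], [0.1, 0.9]]),
           np.array([[0.5, 0.5], [0.6, 0.4]])]
perturbed = [np.array([[0.7, 0.3], [0.2, 0.8]]),
             np.array([[0.4, 0.6], [0.7, 0.3]])]
R = np.array([[1.0, 0.0], [0.0, 2.0]])
V, policy = robust_value_iteration([nominal, perturbed], R)
print(V, policy)
```

The resulting policy maximizes the worst-case discounted return over the ambiguity set, which is the basic guarantee robust planners aim for under misspecified transitions.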
3. How can Markov Decision Processes be applied and extended in specific domains such as autonomous driving and healthcare modeling?
This theme covers the use of advanced MDP models, often augmented with probabilistic logic or fuzzy representations, for behavior selection, planning, and economic evaluation in applied contexts. The focus is on how tailored MDP frameworks can model complex temporal decision problems, such as behavior control for self-driving cars and healthcare resource allocation, while incorporating domain-specific constraints and uncertainties, as illustrated by the sketch below.
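As one hedged illustration of the healthcare side of this theme, here is a minimal Markov cohort model of the kind commonly used in economic evaluation. The states, annual transition probabilities, per-cycle costs, utility weights, and discount factor are all illustrative assumptions, not figures from any study in this theme.

```python
import numpy as np

# Three health states; rows of P are annual transitions *from* a state.
states = ["Well", "Sick", "Dead"]
P = np.array([[0.90, 0.08, 0.02],   # from Well
              [0.10, 0.75, 0.15],   # from Sick
              [0.00, 0.00, 1.00]])  # Dead is absorbing
cost = np.array([100.0, 2500.0, 0.0])     # cost per cycle in each state
utility = np.array([0.95, 0.60, 0.0])     # QALY weight per cycle

cohort = np.array([1.0, 0.0, 0.0])  # entire cohort starts in Well
discount, total_cost, total_qaly = 0.97, 0.0, 0.0
for t in range(40):                 # 40 annual cycles
    total_cost += discount**t * cohort @ cost
    total_qaly += discount**t * cohort @ utility
    cohort = cohort @ P             # advance the cohort one cycle
print(f"Expected discounted cost: {total_cost:.0f}, QALYs: {total_qaly:.2f}")
```

Comparing such totals across intervention and comparator models is the basic mechanism behind cost-effectiveness ratios in healthcare resource allocation.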