Q-Learning Lagrange Policies for Multi-Action Restless Bandits

Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining

https://doi.org/10.1145/3447548.3467370

Abstract

Multi-action restless multi-armed bandits (RMABs) are a powerful framework for constrained resource allocation in which independent processes are managed. However, previous work only studies the offline setting where problem dynamics are known. We address this restrictive assumption, designing the first algorithms for learning good policies for multi-action RMABs online using combinations of Lagrangian relaxation and Q-learning. Our first approach, MAIQL, extends a method for Q-learning the Whittle index in binary-action RMABs to the multi-action setting. We derive a generalized update rule and convergence proof, and establish that, under standard assumptions, MAIQL converges to the asymptotically optimal multi-action RMAB policy as t → ∞. However, MAIQL relies on learning Q-functions and indexes on two timescales, which leads to slow convergence and requires problem structure to perform well. Thus, we design a second algorithm, LPQL, which learns the well-performing and more general Lagrange policy for multi-action RMABs by learning to minimize the Lagrange bound through a variant of Q-learning. To ensure fast convergence, we take an approximation strategy that enables learning on a single timescale, then give a guarantee relating the approximation's precision to an upper bound on LPQL's return as t → ∞. Finally, we show that our approaches always outperform baselines across multiple settings, including one derived from real-world medication adherence data.

CCS Concepts: • Computing methodologies → Reinforcement learning.
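To make the general idea concrete, the following is a minimal, schematic sketch (not the paper's MAIQL or LPQL algorithms) of combining Lagrangian relaxation with tabular Q-learning for a toy multi-action RMAB: each arm learns Q-values under a reward penalized by λ times the action cost, and λ is nudged toward satisfying the per-round budget. All names and constants here (COSTS, BUDGET, the learning rates, the dual step eta, and the toy dynamics) are illustrative assumptions, and the single-timescale λ update is a simplification of the approach described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-action RMAB: N independent 2-state arms, 3 actions with costs 0/1/2.
N, S, A = 5, 2, 3
COSTS = np.array([0.0, 1.0, 2.0])   # per-action resource cost
BUDGET = 4.0                        # total cost allowed per round
# Random (unknown to the learner) transition kernels P[arm, s, a, s'],
# biased so that costlier actions make the "good" state (s = 1) more likely.
P = rng.dirichlet(np.ones(S), size=(N, S, A))
P[..., 1] += 0.3 * COSTS / COSTS.max()
P /= P.sum(axis=-1, keepdims=True)

def reward(state):                  # reward 1 in the good state, 0 otherwise
    return float(state == 1)

# Q-learning with the Lagrangian penalty lam * cost(a) folded into the reward.
Q = np.zeros((N, S, A))
lam, gamma, alpha, eta = 0.5, 0.95, 0.1, 0.01
states = rng.integers(0, S, size=N)

for t in range(20000):
    # Epsilon-greedy actions w.r.t. the penalized Q-values of each arm.
    eps = max(0.05, 1.0 - t / 10000)
    acts = np.array([
        rng.integers(0, A) if rng.random() < eps
        else int(np.argmax(Q[i, states[i]]))
        for i in range(N)
    ])
    next_states = np.array([
        rng.choice(S, p=P[i, states[i], acts[i]]) for i in range(N)
    ])
    for i in range(N):
        target = (reward(next_states[i]) - lam * COSTS[acts[i]]
                  + gamma * Q[i, next_states[i]].max())
        Q[i, states[i], acts[i]] += alpha * (target - Q[i, states[i], acts[i]])
    # Dual-style adjustment: raise lam when the budget is exceeded, lower otherwise.
    lam = max(0.0, lam + eta * (COSTS[acts].sum() - BUDGET))
    states = next_states
```

In this sketch the multiplier λ plays the role of a resource price: arms only take costly actions whose learned value exceeds λ times their cost, and λ drifts until expected spending roughly matches the budget.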
