Explainable Agents as Static Web Pages: UAV Simulation Example

2020, Lecture Notes in Computer Science

https://doi.org/10.1007/978-3-030-51924-7_9

Abstract

Motivated by the apparent societal need to design complex autonomous systems whose decisions and actions are humanly intelligible, the study of explainable artificial intelligence, and with it, research on explainable autonomous agents, has gained increased attention from the research community. One important objective of research on explainable agents is the evaluation of explanation approaches in human-computer interaction studies. In this demonstration paper, we present a way to facilitate such studies by implementing explainable agents and multi-agent systems that i) can be deployed as static files, not requiring the execution of server-side code, which minimizes administration and operation overhead, and ii) can be embedded into web front ends and other JavaScript-enabled user interfaces, hence increasing the ability to reach a broad range of users. We then demonstrate the approach with the help of an application that was designed to assess the effect of different explainability approaches on the human intelligibility of an unmanned aerial vehicle simulation.
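
To make the architectural claim concrete, the following is a minimal sketch in plain JavaScript of a belief-desire-intention-style agent that deliberates and explains its action choice entirely client-side. It is an illustrative assumption, not the authors' implementation and not the JS-son API; all identifiers (the UAV beliefs and desires, deliberate, plans) are hypothetical. Because the sketch has no server-side dependency, it could be shipped as a static file and rendered into any JavaScript-enabled user interface, which is the deployment model the abstract describes.

// Minimal sketch of a client-side explainable agent (illustrative only).
// Beliefs: what the simulated UAV currently holds to be true.
const beliefs = { batteryLevel: 0.35, packageOnBoard: true, atDestination: false };

// Desires: conditions the agent wants to act on, each with a plain-language rationale.
const desires = [
  { id: 'recharge', active: b => b.batteryLevel < 0.4, reason: 'battery level is low' },
  { id: 'deliver', active: b => b.packageOnBoard && !b.atDestination, reason: 'a package is still on board' }
];

// Plans: map an adopted desire (intention) to a concrete action.
const plans = { recharge: 'fly to charging station', deliver: 'fly to delivery address' };

// A simple preference ordering: recharging takes priority over delivering.
const priority = ['recharge', 'deliver'];

function deliberate(beliefs) {
  const activeDesires = desires.filter(d => d.active(beliefs));
  const intention = priority
    .map(id => activeDesires.find(d => d.id === id))
    .find(d => d !== undefined);
  if (!intention) return { action: 'idle', explanation: 'No desire is currently active.' };
  return {
    action: plans[intention.id],
    // The explanation exposes the triggering belief; this string is what a
    // human-computer interaction study could present to participants.
    explanation: `I chose to ${plans[intention.id]} because ${intention.reason}.`
  };
}

// Runs in the browser or Node.js; in a static web page the result could be
// written into the DOM, e.g. document.getElementById('log').textContent = ...
console.log(deliberate(beliefs));

Since deliberation and explanation generation both happen in the browser, a study application built this way needs only a plain file server (or local files) to host, which is the administration and operation advantage the paper points to.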
