Key research themes
1. How can philosophical and social science perspectives improve explanations in AI beyond simplified approximations?
Research on explainable AI (XAI) primarily produces simplified surrogate models that approximate complex decision functions, yet these explanations are often poorly aligned with philosophical and social-science theories of explanation. This theme investigates how insights from philosophy, cognitive science, and sociology, notably that human explanations are contrastive, selective, and interactive, can make AI explanations more meaningful, trustworthy, and contestable for diverse stakeholders.
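To make the contrastive idea concrete, here is a minimal, self-contained Python sketch; the function, the toy loan model, and all names are illustrative assumptions, not any particular XAI library. Rather than ranking every feature, it answers "why P rather than Q?" by reporting only the feature differences between the actual case and a foil case that actually flip the model's decision.

```python
def contrastive_explanation(model, fact, foil, feature_names):
    """Return the features whose values differ between fact and foil,
    keeping only those that change the model's prediction when copied
    from the foil into the fact (a selective, contrastive report)."""
    fact_label = model(fact)
    relevant = []
    for i, name in enumerate(feature_names):
        if fact[i] == foil[i]:
            continue  # identical features cannot explain the contrast
        probe = list(fact)
        probe[i] = foil[i]  # swap in the foil's value for one feature
        if model(probe) != fact_label:
            relevant.append((name, fact[i], foil[i]))
    return relevant

# Toy model: approve a loan iff income >= 50 and debt < 20.
model = lambda x: "approve" if x[0] >= 50 and x[1] < 20 else "reject"

fact = [45, 10, 3]   # rejected applicant: income, debt, dependents
foil = [60, 10, 2]   # comparable approved applicant

print(contrastive_explanation(model, fact, foil,
                              ["income", "debt", "dependents"]))
# -> [('income', 45, 60)]; 'dependents' also differs, but does not flip
#    the decision, so a selective explanation omits it.
```

This mirrors the selectivity point from the social-science literature: people expect the few causes relevant to the contrast they asked about, not an exhaustive attribution over all features.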
2. What formal logical frameworks can unify reasoning about knowledge and belief among agents, and how do they address knowledge representation in multi-agent contexts?
This research explores epistemic logics and argumentation frameworks that formally characterize knowledge and belief, enabling reasoning about agents' information states, higher-order knowledge, and dialogues in multi-agent systems. It investigates formal languages, semantics, and proof systems describing how agents acquire, share, and contest knowledge, and how argumentation can represent defeasible, non-monotonic reasoning in resource-bounded, interactive settings.
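As a concrete illustration of the semantics involved, the following Python sketch evaluates multi-agent epistemic formulas over a hand-built Kripke model; the world names, relation encoding, and tuple-based formula syntax are illustrative assumptions, not a standard library. Knowledge K_a φ is read as "φ holds in every world agent a considers possible", which yields higher-order statements such as "b knows that a does not know p" with no extra machinery.

```python
def holds(formula, world, val, R):
    """Check whether `formula` is true at `world`.
    val: world -> set of true atoms; R: agent -> set of (w, w') pairs."""
    kind = formula[0]
    if kind == "atom":
        return formula[1] in val[world]
    if kind == "not":
        return not holds(formula[1], world, val, R)
    if kind == "and":
        return (holds(formula[1], world, val, R)
                and holds(formula[2], world, val, R))
    if kind == "K":  # ("K", agent, sub): the agent knows sub
        _, agent, sub = formula
        # Knowledge as truth in every world the agent considers possible.
        return all(holds(sub, v, val, R) for (u, v) in R[agent] if u == world)
    raise ValueError(f"unknown connective: {kind}")

# Two worlds: p is true only in w1. Agent a cannot distinguish the
# worlds; agent b can.
val = {"w1": {"p"}, "w2": set()}
R = {"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")},
     "b": {("w1", "w1"), ("w2", "w2")}}

p = ("atom", "p")
print(holds(("K", "a", p), "w1", val, R))   # False: a cannot rule out w2
print(holds(("K", "b", p), "w1", val, R))   # True: b knows p
# Higher-order knowledge: b knows that a does not know p.
print(holds(("K", "b", ("not", ("K", "a", p))), "w1", val, R))  # True
```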
3. How can hybrid and multi-level knowledge representation frameworks integrate symbolic, sub-symbolic, and defeasible reasoning for AI systems?
This area studies frameworks that combine multiple representation approaches (symbolic, sub-symbolic, defeasible, ontological) to model real-world knowledge and reasoning more faithfully, including uncertainty, exceptions, multi-level classifications, and human-like cognitive processes. It analyses formal languages, ontologies, and cognitive architectures that balance scalability, expressivity, and normative constraints while supporting complex reasoning tasks in AI systems.
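The interaction between a strict ontological layer and a defeasible layer can be sketched in a few lines of Python; the rule format and the classic birds/penguins example are illustrative assumptions, not a specific framework. Strict rules are applied monotonically to closure, after which each default rule fires unless one of its exceptions is already established.

```python
def defeasible_conclude(facts, strict_rules, default_rules):
    """Apply strict rules to closure, then fire each default rule
    unless one of its listed exceptions is already established."""
    known = set(facts)
    changed = True
    while changed:  # strict closure: monotonic, exception-free
        changed = False
        for premises, conclusion in strict_rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    for premises, conclusion, exceptions in default_rules:
        if premises <= known and not (exceptions & known):
            known.add(conclusion)  # holds only by default
    return known

# Ontological (strict) layer: penguins are birds.
strict = [({"penguin"}, "bird")]
# Defeasible layer: birds normally fly, unless they are penguins.
defaults = [({"bird"}, "flies", {"penguin"})]

print(defeasible_conclude({"bird"}, strict, defaults))
# -> {'bird', 'flies'}
print(defeasible_conclude({"penguin"}, strict, defaults))
# -> {'penguin', 'bird'}: the exception blocks 'flies', so adding
#    information retracted a conclusion (non-monotonicity).
```

The two rule stores correspond to the multi-level structure the theme describes: the strict layer behaves like ontological subsumption, while the default layer carries the exception-tolerant, human-like reasoning.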