Key research themes
1. How can standardized frameworks define and assess transparency levels for diverse stakeholders in autonomous and AI systems?
This research area focuses on developing measurable, testable standards for specifying and assessing transparency in autonomous systems, addressing the differing needs of stakeholders such as users, regulators, and incident investigators. Such frameworks matter because they support accountability, trust, and safety by making AI systems understandable and their decisions explicable across application contexts.
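As a minimal sketch of what "measurable, testable" transparency assessment could look like, the snippet below maps stakeholders to required transparency levels and checks a system's declared levels against them. The 0–5 ordinal scale, the stakeholder names, and the thresholds are all illustrative assumptions, loosely in the spirit of levelled standards such as IEEE P7001, not values taken from any specific framework.

```python
# Hypothetical ordinal transparency levels (0-5); names and
# thresholds are illustrative only, not from a published standard.
REQUIREMENTS = {
    "end_user": 2,       # plain-language account of a decision
    "regulator": 4,      # documented processes and test evidence
    "investigator": 5,   # full logs enabling incident reconstruction
}

def assess(declared_levels: dict) -> dict:
    """Return, per stakeholder, whether the system's declared
    transparency level meets that stakeholder's requirement."""
    return {
        stakeholder: declared_levels.get(stakeholder, 0) >= required
        for stakeholder, required in REQUIREMENTS.items()
    }

# Example: a system offering user-facing explanations and regulatory
# documentation, but only partial logging for investigators.
report = assess({"end_user": 3, "regulator": 4, "investigator": 2})
print(report)  # {'end_user': True, 'regulator': True, 'investigator': False}
```

The point of such a scheme is that each requirement becomes a testable claim per stakeholder group, rather than a single undifferentiated notion of "transparency".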
2. What are the epistemic and practical challenges underlying transparency in complex AI-driven simulations and computational systems?
This research theme investigates the nature of opacity and transparency in complex computational systems such as AI models, computer simulations, and big-data applications. It probes the conceptual limits of what can be known about system internals, distinguishes multiple layers of opacity, and evaluates how partial or instrumental transparency can still support scientific explanation, artifact detection, and trustworthy deployment.
3. How do methodological and user-centered approaches advance transparency in interpretability, data documentation, and explanation of AI models?
This area studies practical frameworks and methodologies for improving transparency through structured documentation, interpretability techniques, and user-centered explanations. It examines additive versus non-additive model explanations, transparent documentation of datasets and development processes, and clearer communication with users, aiming to bridge the gap between technical AI design and interpretation by diverse stakeholders.
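To make the "additive explanation" notion concrete: in an additive scheme, each feature receives a contribution, and the contributions sum exactly to the gap between the model's prediction and a baseline prediction. The sketch below shows this for a linear model, where the property holds exactly; the weights, baseline, and input are made-up illustrative values.

```python
# Additive explanation for a linear model f(v) = w . v + b:
# attribution of feature i is w_i * (x_i - baseline_i), and the
# attributions sum to f(x) - f(baseline). All numbers are illustrative.

weights = [2.0, -1.0, 0.5]
bias = 0.1
baseline = [1.0, 1.0, 1.0]   # reference input (e.g. feature means)
x = [3.0, 0.0, 2.0]          # instance being explained

def predict(v):
    return sum(w * vi for w, vi in zip(weights, v)) + bias

# Per-feature additive attributions.
contributions = [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

gap = predict(x) - predict(baseline)
print(contributions)                          # [4.0, 1.0, 0.5]
print(abs(sum(contributions) - gap) < 1e-9)   # True: additivity holds
```

Non-additive explanation methods drop this exact decomposition, which is one reason the additive/non-additive distinction matters for how stakeholders can read and audit an explanation.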