Academia.edu

Model Transparency

17 papers · 0 followers

About this topic
Model transparency refers to the degree to which the internal workings and decision-making processes of a model, particularly in machine learning and artificial intelligence, are understandable and interpretable by humans. It emphasizes clarity in how models operate, enabling stakeholders to assess their reliability, fairness, and accountability.

Key research themes

1. How can standardized frameworks define and assess transparency levels for diverse stakeholders in autonomous and AI systems?

This research area focuses on developing measurable, testable standards to specify and assess transparency in autonomous systems, addressing the varying needs of different stakeholders such as users, regulators, and investigators. Establishing such frameworks matters to ensure accountability, trust, and safety by making AI systems understandable and their decisions explicable across multiple application contexts.

Key finding: Introduces the IEEE P7001 draft standard as a structured approach defining testable transparency levels tailored to five stakeholder groups (users, the public/bystanders, safety agencies, investigators, and lawyers). The standard…
Key finding: Develops the concept of Transparency by Design (TbD), integrating contextual, technical, informational, and stakeholder-sensitive principles into AI system development. Proposes nine principles inspired by privacy-by-design…
Key finding: Proposes Method Cards as prescriptive documentation artifacts that go beyond descriptive transparency by providing actionable guidance to ML engineers on model reproduction, design rationales, and mitigation strategies for…

2. What are the epistemic and practical challenges underlying transparency in complex AI-driven simulations and computational systems?

This research theme investigates the nature of opacity and transparency in complex computational systems such as AI, computer simulations, and big data applications. It explores the conceptual limits of knowledge and understanding about system internals, addresses the multiple layers of opacity, and evaluates how partial or instrumental transparency can be attained to support scientific explanations, artifact detection, and trustworthy deployment.

Key finding: Reconceptualizes opacity beyond Humphreys’ notion of computational steps that cannot be surveyed by hand, defining opacity as a disposition to resist epistemic access, including access to forms of knowledge and understanding. It distinguishes different…
Key finding: Analyzes transparency as consisting of three forms—functional transparency (algorithmic functioning), structural transparency (implementation in code), and run transparency (actual execution on hardware and data)—to address…

3. How do methodological and user-centered approaches advance transparency in interpretability, data documentation, and explanation of AI models?

This area studies practical frameworks and methodologies to improve transparency through structured documentation, interpretability techniques, and user-centric explanations. It focuses on additive versus non-additive model explanations, transparent documentation of datasets and processes, and enhanced user communication to bridge the gap between technical AI design and interpretability by diverse stakeholders.

Key finding: Compares multiple additive explanation methods (partial dependence, Shapley explanations, distilled additive explanations, gradient-based explanations) for black-box models, revealing that distilled additive explanations…
Key finding: Explores transparency as a situated and evolving process in qualitative research methodology, emphasizing 'methodological data' as reflexive artifacts that complicate simplistic accounts of transparency. Argues that…
Key finding: Proposes Method Cards as a novel documentation tool for machine learning that combines descriptive information with prescriptive guidance. These cards facilitate model reproduction, clarify design choices, and provide…
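To make the additive-explanation idea concrete, here is a minimal, model-agnostic sketch of partial dependence, one of the methods compared in this theme. The black-box `predict` function and the toy dataset are hypothetical illustrations, not drawn from any of the papers above: for each grid value, the feature of interest is fixed across all rows and the model's predictions are averaged, yielding the feature's marginal (additive) effect.

```python
def partial_dependence(predict, data, feature_idx, grid):
    """Model-agnostic partial dependence for one feature.

    For each value v in `grid`, fix feature `feature_idx` to v in every
    row of `data`, then average the black-box model's predictions.
    The resulting curve is the additive marginal effect of that feature.
    """
    curve = []
    for v in grid:
        total = 0.0
        for row in data:
            modified = list(row)       # copy so the original row is untouched
            modified[feature_idx] = v  # intervene on the feature of interest
            total += predict(modified)
        curve.append(total / len(data))
    return curve


# Hypothetical black-box model with an interaction between features 0 and 1.
def predict(x):
    return 2.0 * x[0] + x[0] * x[1]


data = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
pd_curve = partial_dependence(predict, data, 0, [0.0, 1.0, 2.0])
# Average prediction at grid value v is 2v + v * mean(x1) = 3v here,
# since mean(x1) = 1 for this toy dataset.
print(pd_curve)  # [0.0, 3.0, 6.0]
```

Note how the interaction term is averaged away into a single additive curve; this is exactly the kind of simplification that motivates comparing additive explanations against non-additive alternatives.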

All papers in Model Transparency

In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the…