
Explainable Artificial Intelligence

115 papers
31 followers
About this topic
Explainable Artificial Intelligence (XAI) refers to methods and techniques in artificial intelligence that make the outputs of AI systems understandable to humans. It aims to provide transparency in decision-making processes, enabling users to comprehend, trust, and effectively manage AI systems by elucidating how and why specific decisions or predictions are made.

Key research themes

1. What are the foundational conceptualizations of explainability and how do different definitions impact the development and evaluation of XAI methods?

This theme focuses on clarifying what 'explanation' and 'explainability' truly mean across AI research fields. It matters because inconsistent or vague definitions hinder the development of effective XAI systems, their evaluation, and their adoption in high-stakes domains demanding trust, accountability, and transparency. Clarifying foundational conceptualizations serves as a prerequisite for progress in algorithm design, regulatory compliance, and human-centered explanations.

Key finding: This paper offers a taxonomy of explainability notions—opaque, interpretable, and comprehensible systems—and proposes a fourth category, truly explainable systems that produce automated explanations without human...
Key finding: By contrasting machine learning explanations (approximate models that serve as 'do-it-yourself kits') with philosophical and social science perspectives on explanation, the paper reveals that many XAI approaches produce local...
by Giulia Vilone and 1 more
Key finding: This systematic review clusters theoretical notions of explainability and methods for their evaluation, revealing a hierarchy of explanation requirements such as understandability by end-users and the provision of actionable...
Key finding: The paper proposes a unified theoretical framework differentiating 'explanation' (an interpretable mapping from complex to simpler objects) and 'interpretation' (the actual comprehensible output), providing systematic...
Key finding: Tracing the historical evolution of explainable AI from early expert systems using abductive reasoning to modern opaque deep learning architectures, this paper identifies the representational and computational challenges in...
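The 'explanation as an interpretable mapping from complex to simpler objects' idea above can be sketched concretely with a global surrogate model: a shallow decision tree trained to mimic an opaque model's predictions. The library choice (scikit-learn), dataset, and all variable names below are illustrative assumptions, not drawn from any specific paper listed here.

```python
# A minimal global-surrogate sketch: the shallow tree is the "simpler
# object" that approximates the opaque model, and its rules serve as
# the explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit the surrogate on the black box's *outputs*, not the ground truth:
# it approximates the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple model agrees with the complex one.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```

Fidelity is one of the evaluation criteria such frameworks formalize: an explanation that disagrees with the model it explains is not an explanation of that model at all.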

2. How can explainability methods be designed and evaluated to meet the needs of diverse stakeholders and practical application domains?

This research theme emphasizes designing XAI systems with considerations of target users’ backgrounds and needs, domain-specific constraints, and human cognitive processes. It matters because a one-size-fits-all explanation risks being ineffective or misleading. Evaluating explanations via both human-centered and objective metrics is critical for trust, accountability, and regulatory compliance across sectors like healthcare, finance, and autonomous systems.

Key finding: The paper critiques current monolithic explanation approaches and proposes classifying users into three groups—developers, domain experts, and lay users. Tailoring explanations based on this classification and modeling...
Key finding: This comprehensive review maps the landscape of XAI methods across various domains and tasks, revealing that most XAI methods are domain-agnostic, with healthcare being the most influential domain. The study highlights the...
Key finding: Focusing on healthcare applications, the paper investigates barriers including data bias, privacy restrictions, and critical trust deficits rooted in black-box model opacity. It discusses XAI approaches—transparency vs....
Key finding: This work introduces AI Explainability 360, an open-source Python toolkit encompassing diverse state-of-the-art explainability methods and evaluation metrics designed to address varied stakeholder needs—including developers,...
Key finding: Drawing from cognitive psychology's perceptual process, the paper advocates for 'relatable explanations' that align with how humans select, organize, and interpret information. Through a novel model, RexNet, incorporating...
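A model-agnostic, global importance measure of the kind such toolkits bundle—and one often suited to developer and domain-expert audiences rather than lay users—is permutation importance: the drop in score when one feature is shuffled. The dataset and model below are synthetic placeholders, sketched with scikit-learn rather than any particular toolkit named above.

```python
# Permutation importance: shuffle each feature on held-out data and
# measure how much the model's score degrades. Informative features
# produce large drops; noise features produce drops near zero.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=5, n_informative=3,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=20,
                                random_state=1)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:+.3f}")
```

How such a ranked list should then be presented—raw scores for developers, domain vocabulary for experts, plain-language summaries for lay users—is exactly the stakeholder-tailoring question this theme raises.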

3. What are the emerging challenges and future directions in Explainable Artificial Intelligence considering multidisciplinary perspectives and real-world deployment?

This theme addresses the open problems and interdisciplinary research directions essential for moving XAI from theoretical frameworks to impactful real-world applications. It matters as the complexity of AI systems, coupled with ethical, legal, social, and technical concerns, demands integrative approaches ensuring trust, fairness, robustness, and user comprehension, particularly in safety-critical domains such as healthcare and autonomous systems.

Key finding: This manifesto collates twenty-seven open problems in XAI spanning nine categories including trust, robustness, interactivity, and domain adaptability. It emphasizes multidisciplinary collaboration to bridge gaps between...
Key finding: The survey highlights the unique interpretability challenges posed by deep reinforcement learning (DRL) systems acting autonomously in complex sequential decision-making settings. It reviews state-of-the-art methods and...
Key finding: This work elaborates on the necessity of integrating human-in-the-loop approaches to enhance explanation quality, emphasizing that current state-of-the-art XAI techniques may fail to achieve full understandability from a...
Key finding: Addressing explainability within responsible AI, the article stresses the imperative of aligning AI systems with ethical principles—such as fairness, privacy, and transparency—across organizational and societal levels. It...
Key finding: This paper offers a grounded perspective on the inherent limitations of explainable AI, rooted in the scientific theory framework. It argues that despite advances, explainability may never fully eliminate black-box...
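One of the open problems above, robustness, admits a simple empirical probe: an explanation should not change drastically under tiny input perturbations. The sketch below assumes a linear model whose per-feature attribution for an input x is coefficient times feature value; the data and thresholds are illustrative, not taken from any paper listed here.

```python
# A minimal explanation-stability probe: perturb an input with small
# Gaussian noise and measure how much the feature attributions drift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def attribution(x):
    # Local attribution for a linear model: each feature's contribution
    # to the decision function is its coefficient times its value.
    return model.coef_[0] * x

x = X[0]
base = attribution(x)
# Repeatedly add small (sigma = 0.01) noise and record the largest
# per-feature attribution change.
drifts = [np.abs(attribution(x + rng.normal(0, 0.01, size=x.shape)) - base).max()
          for _ in range(100)]
print(f"max attribution drift under small noise: {max(drifts):.4f}")
```

For a linear model the drift is provably bounded by the coefficients times the noise; for deep networks no such bound holds, which is precisely why robustness of explanations appears among the open problems.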

All papers in Explainable Artificial Intelligence

Chronic kidney disease (CKD) is a major worldwide health problem, affecting a large proportion of the world's population and leading to higher morbidity and death rates. The early stages of CKD sometimes present without visible symptoms,...
Thyroid disease classification is a critical challenge in medical diagnostics, requiring accurate differentiation between hyperthyroidism, hypothyroidism, and normal thyroid function. This study introduces an advanced machine learning...
Federated Learning (FL) provides both model performance and data privacy for machine learning tasks where samples or features are distributed among different parties. In the training process of FL, no party has a global view of data...
The Common Vulnerabilities and Exposures (CVE) database is the largest publicly available source of structured data on software and hardware vulnerabilities. In this work, we analyze the CVE database in the context of IoT device and system...
Artificial intelligence (AI) has achieved unprecedented advancements in recent years, driven largely by deep learning architectures and large-scale data analytics (LeCun, Bengio, & Hinton, 2015). Despite these achievements, there remains...
In the competitive world of digital banking, predicting and reducing customer churn is essential for long-term growth. Traditional predictive models can forecast churn quite accurately, but their lack of transparency is a problem in...
Educational Data Mining (EDM) supports early detection of learning difficulties by predicting student performance. However, machine learning models often operate as black boxes. Explainable Artificial Intelligence (XAI) helps to explain...
An algorithmic culture has emerged in which decisions affecting our daily life increasingly depend on automated systems such as machine learning. Developers of those systems strive for more accuracy, while at the same time demands for...
Introduction: Artificial Intelligence (AI) is only as powerful as the data that fuels it, and this book is your comprehensive guide to understanding the critical data infrastructure that makes AI work. AI has become a transformative force...
Over 64 million people worldwide are affected by heart failure (HF), a condition that significantly raises mortality and medical expenses. In this study, we explore the potential of retinal optical coherence tomography (OCT) features as...
Chronic Kidney Disease (CKD) is a progressive condition that requires accurate diagnosis and staging for effective clinical management. Conventional CKD diagnosis relies on estimated Glomerular Filtration Rate (eGFR), a measure of kidney...
Universal AI Laws / AI Magna Carta / Aion Lumen / Fundamental AI Principles / Asimov's Laws / Deontic Logic
Alpay Algebra is introduced as a self-contained axiomatic framework with the ambition of serving as a universal foundation for mathematics. Developed in the spirit of Bourbaki's structural paradigm and Mac Lane's emphasis on form and...
In this research paper, I argue that, with the new paradigm of deep learning, we should further incorporate evaluation, accountability and acceptance within the human-agent interface. As we shall see, this leads us to move from...
This study aims to explore new therapeutic opportunities for histone deacetylase (HDAC) inhibitors by leveraging drug repurposing approaches and analyzing their bioactivity and molecular fingerprints. The methodology includes...
Early detection and classification of pancreatic tumours are essential for reaching a correct diagnosis and choosing the best treatment options before the disease becomes deadly. Grading can be a tedious and time-consuming process for experts...
Fruit research has now reached a new dimension thanks to machine learning, which produces actionable insights for further exploration by practitioners in the agricultural domain. In order to automatically categorize the edibility of date...
This Research Full Paper focuses on predicting learning performance using machine learning algorithms and interpreting the results using Explainable Machine Learning (EML) techniques. The study compared a comprehensive set of machine...
This study explores vulnerability management and risk assessment through data analysis and machine learning techniques. In the first phase, a preliminary analysis was conducted to understand the distribution of vulnerabilities based on...
Purpose: The article illustrates that data integration will play a pivotal role in value-based healthcare, amplifying the power of the best cost management and improving outcomes. It brings into focus the fundamental systems,...
The XAI concept was launched by DARPA in 2016 in the context of model learning from data with deep learning methods. Although the machine learning community quickly took up the topic, other communities have also included...
Artificial Intelligence (AI) is adopted in many businesses. However, adoption lags behind for use cases with regulatory or compliance requirements, as validation and auditing of AI are still unresolved. AI's opaqueness (i.e., "black box")...
Explainable artificial intelligence (XAI) uses artificial intelligence (AI) tools and techniques to build interpretability in black-box algorithms. XAI methods are classified based on their purpose (pre-model, in-model, and...
Explainable Artificial Intelligence (XAI) is a rapidly evolving field aimed at making AI systems more interpretable and transparent to human users. As AI technologies become increasingly integrated into critical sectors such as...
Patients with chronic kidney disease (CKD) face a high risk of cardiovascular death, yet accurately predicting this risk remains challenging. This study aims to develop an interpretable machine learning (ML) model to predict 10-year...
The communication between robots/agents and humans is a challenge, since humans are typically not capable of understanding the agent's state of mind. To overcome this challenge, this paper relies on recent advances in the domain of...
Motivated by the apparent societal need to design complex autonomous systems whose decisions and actions are humanly intelligible, the study of explainable artificial intelligence, and with it, research on explainable autonomous agents...
Iron (Fe) chelating medicines and histone deacetylase (HDAC) inhibitors are two therapy options for hereditary Friedreich's Ataxia (FA) that have been shown to improve clinical results. Fe chelation molecules can minimize the quantity of...
Artificial Intelligence has now taken a full-fledged role in healthcare and has started driving innovations not only in diagnostics and treatment planning but also in patient monitoring and operational efficiency. This will enable complex...
Today, with the development of artificial intelligence, its application in different areas, including production and operations, has expanded. Explainable artificial intelligence (XAI) is a new research topic that has emerged with the...
The field of explainable artificial intelligence (XAI) aims to explain the decisions of DNNs. Complete DNN explanations accurately reflect the inner workings of the DNN while interpretable explanations are easy for humans to understand....
Background: This study utilized advanced Artificial Intelligence (AI) techniques to develop predictive models for legume crop yields in the context of climate change scenarios. With the escalating challenges posed by climate change,...
Artificial Intelligence is making significant inroads into various aspects of business and life, bringing transformation in many ways. The convergence of technology and finance, often called FINTECH, is a rapidly growing area of...
Explainable AI (XAI) has emerged as a critical field in artificial intelligence, addressing the "black box" nature of complex machine learning models. This article explores the importance of transparency in AI decision-making, the...
The rapid advancement of artificial intelligence (AI) has led to its widespread adoption across various domains. One of the most important challenges faced by AI adoption is to justify the outcome of the AI model. In response, explainable...
One of the most critical aspects of a software piece is its vulnerabilities. Regardless of the years of experience, type of project, or the size of the team, it is impossible to avoid introducing vulnerabilities while developing or...
Edge devices that operate in real-world environments are subjected to unpredictable conditions caused by environmental forces such as wind and uneven surfaces. Since most edge systems such as autonomous vehicles exhibit dynamic...
An Advisory System using Sentiment Analysis (ASSA) is proposed to analyze student reviews and assist faculty and university officials in addressing challenging areas of teaching and learning. In addition to analyzing student comments to...
In recent years, deep learning has become prevalent to solve applications from multiple domains. Convolutional Neural Networks (CNNs) particularly have demonstrated state of the art performance for the task of image classification....
Recently, the eXplainable AI (XAI) research community has focused on developing methods making Machine Learning (ML) predictors more interpretable and explainable. Unfortunately, researchers are struggling to converge towards an...
With the recent successes of black-box models in Artificial Intelligence (AI) and the growing interactions between humans and AIs, explainability issues have risen. In this article, in the context of high-stake applications, we propose an...
This panel will discuss the problems of bias and fairness in organizational use of AI algorithms. The panel will first put forth key issues regarding biases that arise when AI algorithms are applied to organizational processes. We will...
AI tools are becoming more commonly used in a variety of application domains. In this paper, we describe a system named FATE that combines state of the art AI tools. The goal of the FATE system is decision support with use of ongoing...
A governance framework for algorithmic accountability and transparency: Algorithmic systems are increasingly being used as part of decision-making processes in both the public and private sectors, with potentially significant consequences...