Papers by José Uetanabara Júnior

An Initial UML Approach Towards EU AI Act Compliance

Artificial Intelligence (AI) has gained prominence in recent years and is widely applied in academic and industrial contexts. Its popularization has raised several challenges, particularly the need to make AI models auditable. Explainable Artificial Intelligence (XAI) seeks to address this issue through methods that interpret the decisions of black-box models. Despite this progress, few studies integrate XAI into the software engineering cycle. At the same time, the European Union's AI Act (Regulation 2024/1689) requires extensive documentation for high-risk systems, often resulting in hundreds of pages of reports. To address this gap, this work proposes An Initial UML Approach Towards EU AI Act Compliance, introducing stereotypes, tagged values, and relationships for LIME-based explanations. By graphically representing critical XAI elements, the approach enhances traceability and auditability while providing partial coverage of AI Act requirements, serving as a structured complement to mandatory textual documentation. The proposal is illustrated through a case study involving a breast cancer diagnosis system.
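The abstract names LIME as the explanation method and a breast cancer diagnosis system as the case study. As a rough illustration of the kind of explanation artifact the proposed stereotypes and tagged values would capture in a UML model, the sketch below applies LIME to a classifier trained on scikit-learn's built-in breast cancer dataset; the dataset, model, and all identifiers are assumptions for illustration, not the authors' actual system.

```python
# Minimal sketch: producing a LIME explanation for a breast cancer
# classifier. The dataset and model here are illustrative assumptions
# (scikit-learn's built-in dataset), not the paper's implementation.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# The "black-box" model whose decisions must be made auditable.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: each (feature, weight) pair is the kind of
# XAI element the proposed UML stereotypes would represent graphically.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```

Each weighted feature in the output is a candidate for a tagged value in the UML model, which is how such an explanation could be traced back to documentation required for high-risk systems under the AI Act.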