[2409.14590] Explainable AI needs formalization

By Stefan Haufe and 6 other authors

Abstract: The field of “explainable artificial intelligence” (XAI) seemingly addresses the desire that decisions of machine learning systems should be human-understandable. However, in its current state, XAI itself needs scrutiny. Popular methods cannot reliably answer relevant questions about ML models, their training data, or test inputs, because they systematically attribute importance to input features that are independent of the prediction target. This limits the utility of XAI for diagnosing and correcting data and models, for scientific discovery, and for identifying intervention targets. The fundamental reason for this is that current XAI methods do not address well-defined problems and are not evaluated against targeted criteria of explanation correctness. Researchers should formally define the problems they intend to solve and design methods accordingly. This will lead to diverse use-case-dependent notions of explanation correctness and objective metrics of explanation performance that can be used to validate XAI algorithms.
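To make the failure mode described in the abstract concrete, here is a minimal Python sketch of a so-called suppressor variable. This example is not taken from the paper; the linear-regression setup and variable names are illustrative assumptions. A feature that is statistically independent of the prediction target nevertheless receives a large model weight, because the model uses it to cancel noise in an informative feature; any attribution method that reads importance off such weights will flag the target-independent feature as important.

```python
# Minimal sketch (illustrative, not from the paper): a suppressor variable
# that is independent of the target still receives a large linear-model weight.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

target = rng.normal(size=n)       # prediction target y
distractor = rng.normal(size=n)   # shared noise, independent of y

x1 = target + distractor          # informative feature, contaminated by noise
x2 = distractor                   # suppressor: statistically independent of y
X = np.column_stack([x1, x2])

model = LinearRegression().fit(X, target)

# x2 is uncorrelated with the target ...
print("corr(x2, y):", np.corrcoef(x2, target)[0, 1])  # ~0.0
# ... yet the model weights it heavily, to cancel the noise in x1.
print("weights:", model.coef_)                        # ~[1.0, -1.0]
```

Against a ground truth in which only x1 carries target information, a use-case-dependent correctness metric of the kind the authors advocate would score a weight-based attribution here as a false positive on x2.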

Submission history

From: Stefan Haufe
[v1] Sun, 22 Sep 2024 20:47:04 UTC (840 KB)
[v2] Thu, 26 Sep 2024 12:29:45 UTC (840 KB)
[v3] Sat, 23 Nov 2024 23:02:49 UTC (819 KB)
[v4] Fri, 9 Jan 2026 12:43:56 UTC (827 KB)
