The explainability of AI in the era of deep learning
Written by Nicolas Berkouk, Mehdi Arfaoui and Romain Pialat
08 July 2025
Chatbots, autonomous cars, personalized medicine, fraud detection: the revival of Artificial Intelligence that took place in the mid-2010s with the success of deep learning techniques has made it possible to automate tasks that were previously considered very difficult for computers to handle. This increased degree of automation, combined with ever-higher performance, makes the question of mastering and understanding these systems unavoidable.
A field of research quickly emerged to offer technical solutions to these questions: explainable AI (or XAI). The range of proposed methods is extremely vast, yet after more than ten years of research, none of them seems to have achieved consensus or escaped controversy. As the fields of application of AI systems expand, and as the legal framework for explainability is strengthened by new European regulations, the use of these techniques raises important questions at the interface of law, technology, and society.
In this series, we begin, in a first article [1/3], by placing the question of generating explanations for systems based on deep learning within the long history of artificial intelligence, in order to highlight how the success of this technology fundamentally renews the way we can access the logic of an AI system's computations. In a second article [2/3], we present the most popular techniques in this field and highlight their heterogeneity as well as, for some of them, their lack of robustness. This leads us to the need to understand how these techniques are produced, that is, to study the social and institutional context that enables their development. We conclude, in a third article [3/3], by proposing the hypothesis that if this research field has not yet reached the status of consensual scientific production, it is because it functions as a borderland between the "core" of AI development and the public domain, a position that subjects it to heterogeneous demands that do not encourage the development of commonly shared scientific foundations.
- Explaining AI systems, a problem renewed by the success of deep learning algorithms [1/3]
- Explainable AI techniques [2/3]
- Why take an interest in the sociology of XAI science? [3/3]