A new project on explainable artificial intelligence

Written by Romain Pialat


31 July 2024


Artificial intelligence systems, and algorithms in general, are sometimes opaque, and their results can be difficult to interpret. A young but prolific field of research is dedicated to their explainability. The aim of this project is to understand how this research is structured, by combining mathematical analyses of the techniques used with quantitative and qualitative elements from the social sciences. As part of this study, the CNIL will use a database of scientific publications relating to the explainability of artificial intelligence, obtained via a search engine specialized in scientific literature.

 What is the purpose of this study?

Explainable AI, or simply xAI, is a scientific field developing methods and techniques for explaining the information, predictions or decisions generated by artificial intelligence systems. Such explanations are necessary when these systems are used in critical contexts (medicine, the military, transport, etc.). Since DARPA launched its Explainable AI program in 2016, scientific publications containing the term "explainable AI" have appeared suddenly and in large numbers.

This discipline, still largely rooted in computer science, lacks consensus on the techniques used, on the objective of an explanation, and on what even constitutes an explanation. Yet this lack of consensus seems to be little, if at all, questioned within the xAI field itself. The aim of this study is therefore to gain a better understanding of the underlying dynamics structuring the Explainable AI scene.

 

What data for what uses?

Our aim is to draw up a typology of xAI techniques. Given how rapidly this field of research evolves, a purely technical typology would quickly become obsolete; we therefore focus on the social principles and mechanisms behind the organization and production of these techniques. To this end, we aim to understand and identify regularities in the institutional, academic and social positions of xAI actors.

A database of around 12,000 publications on this topic was compiled using the Semantic Scholar search engine. It contains paper titles, author names, and other characteristics of each publication, such as the year of publication, the journal or conference in which it appeared, its citations, etc.
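For illustration, a single record in this database could look like the following Python dictionary. This is a minimal, invented example using a subset of the Semantic Scholar fields requested in the query shown further down this page; all values are placeholders.

# Invented illustrative record; field names follow the Semantic Scholar
# API request below, values are placeholders.
record = {
    "paperId": "a1b2c3d4e5f6...",
    "title": "An example publication on explainable AI",
    "year": 2019,
    "venue": "Example Conference on Machine Learning",
    "authors": [{"authorId": "12345", "name": "A. Author"}],
    "citationCount": 42,
}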

We will therefore process all these data, together with data relating to the professional lives of the people in our database that are publicly available on the Internet (a sketch of what such an enriched record could look like follows the list below), such as:

  • Academic position.
  • Home university.
  • Previous publications or fields of research.
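As a sketch of the kind of author-level record such enrichment could produce, the structure below mirrors the items in the list above. The field names are hypothetical, not the project's actual schema.

from dataclasses import dataclass, field

# Hypothetical author-level record; names and structure are illustrative only.
@dataclass
class AuthorProfile:
    name: str
    academic_position: str            # e.g. "PhD student", "professor"
    home_university: str
    prior_publications: list[str] = field(default_factory=list)
    research_fields: list[str] = field(default_factory=list)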

We will carry out similar studies in other, older fields of research, so as to have control databases against which to compare our results. For the moment, only one such field is included: fairness in artificial intelligence.

 

How are people's rights respected?

The data processed during this project were obtained by querying Semantic Scholar with the following request:

query = '"Model interpretability" | "Models interpretability" | "model explanations" | "models explanations" | "explanations of models" | "explaining models" | "Explainable Artificial Intelligence" | "XAI" | "explainable AI" | "interpretable AI" | "interpretable artificial intelligence"'

fields = "paperId,corpusId,url,title,venue,publicationVenue,year,authors,externalIds,abstract,referenceCount,citationCount,influentialCitationCount,isOpenAccess,openAccessPdf,fieldsOfStudy,s2FieldsOfStudy,publicationTypes,publicationDate,journal,citationStyles"
url = f"http://api.semanticscholar.org/graph/v1/paper/search/bulk?query={query}&fields={fields}&year=1970-"
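The snippet below is a minimal sketch of how such a request might be executed in Python with the requests library. It assumes the bulk search endpoint's token-based pagination (a "token" field in each JSON response pointing to the next page of results); error handling and rate limiting are omitted.

import requests

BASE_URL = "http://api.semanticscholar.org/graph/v1/paper/search/bulk"

def fetch_all_papers(query, fields):
    # Collect every page of results returned by the bulk search endpoint.
    papers = []
    params = {"query": query, "fields": fields, "year": "1970-"}
    while True:
        response = requests.get(BASE_URL, params=params)
        response.raise_for_status()
        payload = response.json()
        papers.extend(payload.get("data", []))
        token = payload.get("token")
        if not token:  # no continuation token means the last page was reached
            break
        params["token"] = token
    return papers

papers = fetch_all_papers(query, fields)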

You have the right to access and obtain a copy of your data, to object to its processing, and to have it corrected or deleted. You also have the right to restrict the processing of your data. To exercise these rights regarding this processing, you can contact the CNIL's Digital Innovation Laboratory ([email protected]) or the CNIL's Data Protection Officer (DPO), whose contact details are at the bottom of this page.

If, after contacting us, you feel that your "Data Protection" rights have not been respected, you may submit a complaint to your local Data Protection Authority.

 

How is this project managed?

This project falls within the scope of the public interest mission entrusted to the CNIL under the General Data Protection Regulation and the amended French Data Protection Act. It is part of the CNIL's mission to provide information, as defined in article 8.I.1 of the French Data Protection Act, as well as its mission to monitor developments in information technology, as defined in article 8.I.4.

Only members of the CNIL's Digital Innovation Laboratory (LINC) and Artificial Intelligence Department (SIA), in charge of this study, will have access to the personal data collected and processed as part of the experiment.

 

How long will the study last?

This project will end in September 2025, at which point the processed data will be deleted. The study's findings will be presented in several publications on the LINC website.

Contact the CNIL DPO

 Electronically by following this link

By mail:

The Data Protection Officer,

CNIL 3 Place de Fontenoy,

TSA 80715

75334 PARIS CEDEX 07

France


Article written by Romain Pialat, Research & Development Engineer at LINC