[Article 2/2] Neurodata, personal data like no other

Written by Régis Chatellier


03 October 2025


The classification of neurodata as personal data is not a matter of debate. Collected directly from the brain or nervous system of the individuals concerned, such data falls within the scope of the GDPR insofar as it allows a person to be identified, directly or indirectly.

However, although particularly sensitive in its uses, data derived from neurotechnologies is not included in Article 9 of the GDPR, which lists “special categories of personal data”, commonly referred to as “sensitive data”. While this article covers “the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, and data concerning health”, neurodata is not explicitly mentioned. Depending on the context in which it is collected and used, neurodata may constitute health data, “related to the physical or mental health of a natural person”, particularly in all healthcare-related cases. Similarly, neurodata can constitute biometric data when used to identify a person, as in the case of the French startup Yneuro, which offers an authentication system based on the “electronic signature of the brain”, using neural sensors integrated into audio headsets. The classification is less clear-cut when neurotechnologies are used in the field of education or to improve the experience of a video game, for example.
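To make the “biometric data” qualification more concrete, the minimal sketch below shows one simplified way an EEG “brain signature” could be used for authentication: frequency-band power features are extracted from a short recording and compared with a template stored at enrollment. This is a hypothetical toy example, not Yneuro’s actual method; the sampling rate, frequency bands, similarity measure, and acceptance threshold are all assumptions.

    import numpy as np

    FS = 256  # sampling rate in Hz (assumed)
    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power_features(eeg, fs=FS):
        """Summarize a single-channel EEG epoch as mean power per frequency band."""
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        power = np.abs(np.fft.rfft(eeg)) ** 2
        return np.array([power[(freqs >= lo) & (freqs < hi)].mean()
                         for lo, hi in BANDS.values()])

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def authenticate(epoch, enrolled_template, threshold=0.95):
        """Accept the user if the live 'brain signature' matches the enrolled one."""
        return cosine_similarity(band_power_features(epoch), enrolled_template) >= threshold

    # Toy usage with synthetic signals; a real system would rely on multi-channel
    # recordings, far more robust features, and a properly calibrated classifier.
    rng = np.random.default_rng(0)
    enrolled_template = band_power_features(rng.normal(size=2 * FS))
    print("authenticated:", authenticate(rng.normal(size=2 * FS), enrolled_template))

Even in such a simplified form, the enrolled template is a stable physiological identifier, which is precisely what brings this kind of processing within the scope of biometric data.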

Similarly, as noted by the Berlin Group, Article 4 of the GDPR introduces the concept of mental identity into the definition of personal data, although “no explicit definition [is] provided in the GDPR regarding the limits or boundaries of what may constitute mental identity, given other complex and culturally rooted philosophical contexts” (p. 10). The Group also refers to Article 3 of the Charter of Fundamental Rights of the European Union, which states that “everyone has the right to respect for his or her physical and mental integrity”.

The collection and use of neurodata, insofar as they aim to analyze the brain and, in some cases and in certain ways, to “read thoughts” or to alter how people think or react in specific circumstances, raise issues and debates regarding the ethics of neurotechnologies, as well as the adequacy of the legal framework, as it stands in 2025, to address the various use cases of such personal data.

In most cases, the GDPR applies. However, the Berlin Group notes certain specificities of neurodata that raise particular challenges for the proper application of the regulation. In particular, it highlights the obligations of transparency and information: the highly exploratory and novel nature of these technologies raises the question of the level of information to be provided to data subjects, ranging from strictly necessary information (details on collection, storage, retention periods, and purposes) to additional elements on the possible consequences of using these technologies, particularly in terms of risks. The principle of fairness is also subject to debate, notably in light of individuals’ reasonable expectations regarding the effects produced by neurotechnologies and their potential consequences. The Berlin Group further notes that inaccuracies in the data could directly affect individuals’ rights and freedoms, in particular their right to integrity. Similarly, the principles of proportionality and necessity should be assessed in light of the fundamental rights at stake.

Most often, the processing of neurodata could rely on consent, particularly when it concerns health or biometric data. However, “the complexity of neurotechnology and the data it processes creates intrinsic challenges for consent as a legal basis for processing”, given that consent must be free, informed, and specific. In the workplace context, for example, consent cannot be considered free, as the CNIL recalls in its framework on human resources management (p. 4): “Employees are only very rarely in a position to freely give, refuse, or withdraw their consent, given the dependency inherent in the employer/employee relationship. They can only give valid consent when accepting or rejecting a proposal has no consequences on their situation”. For uses in the “private” or leisure sphere, the informed nature of consent may be difficult to achieve, in a context where the technologies are complex and still under development. Providing complete information could, in such cases, become overly complex for non-specialists, raising the following challenge (p. 19): “The technology itself is incredibly complex: is it truly possible to provide people without advanced technical expertise with concise, transparent, intelligible, and easily accessible information about the data collected, the way it is processed, and all the risks and benefits in such a context? And how can we be sure that the individuals concerned genuinely understand the information provided to them regarding data processing?”

Security risks are particularly critical in neurotechnologies. In an article published in 2016, researchers described this threat as “brainjacking”, that is, the ability of attackers to “maliciously exert control over brain implants”, through two possible modes: blind attacks, which require no specific knowledge of the patient, and targeted attacks, which rely on patient-specific information. “Blind attacks include halting stimulation, draining the implant’s batteries, causing tissue damage, and stealing information. Targeted attacks include altering motor functions, modifying impulse control, changing emotions or affect, inducing pain, and manipulating the reward system”. As early as 2017, in the United States, the Food and Drug Administration (FDA), the federal agency responsible for patient protection and product approval, ordered the recall of pacemakers that contained security vulnerabilities. These flaws could allow “an unauthorized person to access the device” and “modify the pacemaker’s commands, which could harm the patient by rapidly depleting the battery or imposing an inappropriate heart rhythm”.

Among the ongoing debates on neurotechnologies, the need for new legislation, soft-law instruments, or regulatory mechanisms regularly emerges. Several initiatives from the past decade can already be identified.

 

The controversial call for neurorights

The Neurorights Foundation, a U.S.-based non-profit organization founded in 2017 with the aim of ‘promoting the ethical use of neurotechnology for the benefit of society,’ advocates for the creation of new specific rights:

  • The right to mental privacy, which seeks to prevent neural activity from being decoded without the individual’s consent;
  • The right to mental identity, which aims to protect against technologies that alter consciousness or personality;
  • The right to free will, or cognitive liberty, which seeks to prevent neurotechnologies from impairing decision-making capacities;
  • The right to fair access to neuroenhancement, insofar as neurotechnologies may enhance or ‘augment’ the abilities of those who can afford them;
  • The right not to be subjected to neurodiscrimination, particularly in light of biases embedded in the algorithms used for processing neurodata.

 

Chile was the first country in the world to adopt these “neurorights”, incorporating them into its Constitution as early as 2021. At the time, UNESCO noted the premature nature of integrating such rights, “given the state of development of neurotechnologies, which still have a limited capacity to act on the human brain”, while also stressing that “progress in the field of neurotechnologies continues to accelerate”. A psychologist and professor at the University of Valparaíso observed that “while the development of these technologies is not a problem in itself, it may nonetheless cross dangerous boundaries in the absence of regulation [...] it would be naïve to think that advances [in neurotechnologies] will not result in commercial applications”. He specifically warned of the risk of integrating “neuro-cookies” that “would make it possible to identify a consumer’s preferences and, ultimately, implant new ones”. As early as August 2023, the Supreme Court of Chile ordered the U.S. company Emotiv, which develops headsets and software for collecting and processing electroencephalography data, to erase data it had collected from a former Chilean senator.

Although there is broad agreement that the processing of such data is particularly sensitive and that certain limits must be set, the necessity of these new rights remains contested. Nita Farahany, professor at Duke University in the United States and author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, argues that new rights are needed, particularly to protect “cognitive liberty” and to guard against technologies that could predict cognitive states, even when these are not measured from neurodata. For its part, the European Parliamentary Research Service (EPRS), in a report published in July 2024 entitled The Protection of Mental Privacy in the Area of Neuroscience: Societal, Legal and Ethical Challenges, stated that how such neurorights would be regulated “is not clear”. It identified certain issues linked to the perception and use of neurotechnologies promoted by the Neurorights Foundation. Chief among these is a form of “neuro-enchantment”, whereby technologies labeled with the prefix “neuro” acquire strong persuasive power, fueling socio-technical imaginaries that are “optimistic and boundless” in their portrayal of neurotechnologies’ capabilities. This, in turn, leads to “neuro-essentialism”, a reification of neurotechnologies. Similar critiques, the report notes, could equally have been made at earlier moments about technologies such as big data, blockchain, or now artificial intelligence. The report further asserts that its initial analyses “showed that the rights enshrined in the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union could provide an appropriate legal framework”. It cautions, however, that “if new human and/or fundamental rights were introduced, existing rights could be restricted and new regulatory gaps might emerge”, particularly in a context where “there is a lack of established case law concerning new rights”. The EPRS therefore concludes that the challenges posed by neurotechnologies should instead be addressed through existing regulations, such as the GDPR or the AI Act.

 

Recommendations for “Responsible” and “Ethical” Development

Alongside these debates on the adoption of new specific rights, international organizations have also taken positions: as early as 2019, the OECD published its Recommendation on Responsible Innovation in Neurotechnology, a non-binding legal instrument aimed at “helping public authorities and innovators anticipate and address ethical, legal, and social challenges while promoting innovation in this field”. Without calling for the establishment of new specific rights, it sets out nine principles, including the promotion of responsible innovation, notably “taking into account human rights and societal values, in particular respect for privacy, cognitive liberty, and individual autonomy”, the promotion of inclusivity, the protection of a culture of responsibility and trust, and the anticipation of potential unintended and/or abusive uses of neurotechnologies. Building on these recommendations, the European Brain Council (a neurotechnology stakeholder network founded in 2002) published its European Charter for the Responsible Development of Neurotechnologies.

In France, a charter for French neurotechnologies was launched on November 17, 2022, under the lead of the Ministry in charge of Higher Education and Research. It should also be noted that, since the Bioethics Law of August 2021, prohibitions have been set out in the Public Health Code (Article L1151-4): “Acts, processes, techniques, methods, and equipment that modify brain activity and present a serious danger or suspected serious danger to human health may be prohibited by decree, after advice from the High Authority for Health”, as well as in the Civil Code (Article 16-14): “Brain imaging techniques may only be used for medical or scientific research purposes or within the framework of judicial expertise, excluding, in this context, functional brain imaging”, and only with the “express consent of the individual”.

The Council of Europe, for its part, published an “Expert Report on the Implications of the Use of Neurotechnology and Neural Data on Privacy and Data Protection from the Perspective of Convention 108+”, in which it considers that “the development of neurotechnologies based on the processing of neural data should be grounded in the principles of Convention 108+ when such data are considered personal data”. However, given the “risk of misuse of neurotechnologies”, it recommends that states implement a “risk-based classification of these technologies, similar to that developed for artificial intelligence technologies”, where “unacceptable risks (red lines) could be included as a policy option”. The report echoes the proposal by Marcello Ienca and Gianclaudio Malgieri to explore the possibility of developing and implementing a “Mental Data Protection Impact Assessment” (MDPIA), a “specific data protection impact assessment aimed at better evaluating and mitigating risks to fundamental rights and freedoms related to the processing of mental health-related data”. “If the processing of neural data is considered to present a high risk by default, the data controller should be required to: describe the processing (including a description of the technology logic); carry out a balancing test based on the necessity and proportionality of data processing relative to the intended purposes; assess the actual risks to fundamental rights and freedoms; and propose appropriate measures to address and mitigate these risks”. This MDPIA could involve “an audit of the technological components of the processing” (for example, AI-based processing), a thorough evaluation, and a potential reconsideration of the algorithm to determine whether certain risks can be mitigated “by design”.
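To illustrate the structure of this proposal, here is a minimal sketch, in Python, of what a record covering the four MDPIA elements quoted above might look like. The class and field names are invented for illustration; they do not come from Ienca and Malgieri’s proposal or from the Council of Europe report.

    from dataclasses import dataclass

    @dataclass
    class MentalDataProtectionImpactAssessment:
        """Toy record of the MDPIA elements quoted above (field names are invented)."""
        processing_description: str      # including the logic of the technology
        purposes: list                   # intended purposes of the processing
        balancing_test: str              # necessity and proportionality assessment
        risks_to_rights: list            # actual risks to fundamental rights and freedoms
        mitigation_measures: list        # measures addressing and mitigating those risks
        technology_audit_done: bool = False  # audit of technological components (e.g. AI)

        def is_complete(self):
            """High-risk-by-default processing requires every element to be filled in."""
            return all([self.processing_description, self.purposes, self.balancing_test,
                        self.risks_to_rights, self.mitigation_measures])

    # Hypothetical example: an EEG headset adapting a video game to attention levels.
    mdpia = MentalDataProtectionImpactAssessment(
        processing_description="EEG headset adjusts game difficulty to attention levels",
        purposes=["gameplay personalization"],
        balancing_test="less intrusive alternatives documented and rejected",
        risks_to_rights=["inference of emotional states", "profiling of minors"],
        mitigation_measures=["on-device processing", "short retention", "per-session opt-in"],
    )
    print("MDPIA complete:", mdpia.is_complete())

The point of such a structure, in the spirit of the proposal, is that a high-risk-by-default processing operation could not proceed until every element is documented.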

Recommendations on neurotechnologies can also be of an “ethical” nature, as exemplified by the ongoing UNESCO project, discussed in May 2025 and expected to conclude in November 2025, on a “Recommendation on the Ethics of Neurotechnologies”, a fairly long and detailed text. These recommendations adopt a human-centered approach, based on the principles of “beneficence, proportionality, and safety”, “autonomy and freedom of thought”, “data protection”, “non-discrimination and inclusion”, “accountability”, “justice”, and the “best interests of the child”. They specifically call on “Member States, private actors, and international institutions to actively support high-quality research on responsible neurotechnologies, as well as their development, deployment, and use in the public interest”, and to make investments that “promote human health and well-being”. More broadly, UNESCO’s project calls for “global, transparent, multi-stakeholder, intersectoral, and multidisciplinary coordination”, particularly to carry out impact assessments on human rights, economic and social implications, and privacy, as well as benefit-risk evaluations.

 

Ongoing and Upcoming Work

The regulation of the collection and use of brain data, while already addressed in Europe through the existing legal framework relating to health, fundamental rights, personal data (GDPR), and now artificial intelligence, continues to raise significant questions and debates about how these neurotechnologies should be governed.

On the one hand, most organizations recognize, and even seek to promote, the development of these technologies for the new opportunities they can offer, in the health sector in particular; on the other, they warn of the risks inherent in a lack of regulation and ethical frameworks for uses in the private, economically driven sector.

Beyond this, it is the overall understanding of the brain, consciousness, and thought that raises questions for all stakeholders, particularly in an international context composed of diverse philosophical and cultural traditions. As Laure Tabouy, researcher, lecturer, and project manager in neuroethics at Aix-Marseille University, notes, “the concept of ‘mental privacy’ is […] not easy to define, both neuroscientifically and philosophically”. Ethical concerns vary from one region of the world to another, but according to the researcher, “given the rapid deployment of these neurotechnologies and the international nature of the markets, it is necessary to coordinate at multiple geographic levels (national, European, and international) within a limited timeframe”. The Berlin Group, in its recommendations (p. 37), agrees on the need for work addressing these specific challenges, calling for a deeper examination of the intersection between the human right to dignity and neurotechnologies, in order to better understand how and when these technologies can be compatible with it, as well as for a “comprehension of specific data protection regimes to support an effective global approach”.


Back to Article 1: Neurotechnologies: Conquering the Brain

 


Illustration: Adobe Stock


Article written by Régis Chatellier, Prospective Studies Officer