[1/3] Human supervision, hybrid decisions: what are the challenges?
Written by Charlotte Barot
-
16 October 2024
As decision-support systems become more widespread, their use in critical decision-making contexts raises serious ethical and legal concerns. To mitigate the main risks identified in the field of decision-making, the law requires human intervention or integrated human oversight within the decision-making process, resulting in "hybrid" mechanisms that combine computational power with human judgment. In this series of articles, the LINC explores, based on the scientific literature, two key obstacles to the effectiveness of such mechanisms: on the one hand, users' trust biases toward the system, and on the other, the opacity of the system's suggestions.
Human oversight is presented as an essential safeguard to ensure reliable and fair decisions, leading to hybrid decision-making procedures that combine human intervention with the use of algorithms. However, these hybrid procedures, which aim to merge the efficiency of decision-support systems with the human qualities of judgment, can only function if the human decision-maker is able to fully assess the output presented to them, an issue already highlighted by the CNIL in its 2017 ethical report “How can we ensure that humans remain in control?”. In this literature review, the LINC explores the obstacles to implementing hybrid decision-making systems and the potential avenues identified for their improvement.
Data accumulation: the need for analysis
With the rise of big data, decision-support algorithms (also referred to as ADM, for automated decision-making) have become essential tools for addressing complex problems. These artificial intelligence (AI) systems produce rapid estimates on which decisions must be based, yet often without providing the objective elements needed to assess what the AI proposes. This is particularly relevant in sectors such as healthcare (Jacobs 2021, Gaube et al. 2021, Beede et al. 2020), finance, content moderation (Link et al. 2016, Gillespie 2020), and fraud detection.
Decision-support algorithms offer clear benefits when automating simple yet labor-intensive tasks; by doing so, they ease part of the human workload and, in principle, reduce human resource costs. At the individual level, many minor everyday decisions are commonly delegated to algorithms without harmful consequences, such as recommending the fastest route to work or suggesting songs to listen to.
For more complex tasks involving difficult choices, however, trust in these tools can extend beyond delegating the execution of well-understood processes to also handing over part of the judgment itself, turning the system into a guide, or even an “oracle”.
Automating for better decisions
While delegating everyday decisions may seem reasonable (since they carry little consequence), asking ChatGPT to make an important decision, such as purchasing a property, appears far less so. Such a service is expected to provide recommendations or advice at best, but it would seem unreasonable, for these kinds of issues, to let the algorithm have the final say.
Despite their impressive capabilities, these systems cannot fully substitute for human judgment. They prove inadequate for decisions that require not only assessing likelihoods but also weighing a broader set of factors: ethical concerns, social context, and the long-term consequences of the decision, as in the field of justice.
Human sovereignty and shared decisions
The use of automated systems raises ethical concerns, one possible response being the integration of human oversight to preserve decision-making autonomy and ensure the relevance of decisions.
Thus, the French Data Protection Act (loi informatique et libertés) prohibits, in Article 47, judicial decisions based solely on automated processing and sets strict limits on the automation of decisions that significantly affect individuals. It requires that whenever a decision has a notable impact on a person's life, it cannot result solely from a fully automated process and must include human intervention, except in the case of individual administrative decisions.
Similarly, the General Data Protection Regulation (GDPR) sets out the same safeguards in Article 22, refining the scope of possible exceptions: when the data subject has given consent, when automated processing is necessary for the performance of a contract, or when explicitly provided for by law.
Effectiveness of procedures
- Unfortunately, human intervention does not always improve the outcomes produced by an automated system and can even worsen them, because decision-makers may adopt rigid attitudes: either blindly accepting the system's suggestions (known as automation bias or acceptance bias), or rejecting them on principle (known as aversion bias). Such biases undermine performance and, consequently, threaten the reliability of hybrid mechanisms. The article “To Believe or to Doubt: Trust Biases in Decision-Making” explores issues related to human decision-makers' trust biases, which affect decision quality, and highlights possible avenues for remediation identified by the research community.
- However, trust issues are not simply errors in judgment: they reveal a deeper difficulty in correctly evaluating the results provided by AI, raising the question of the interpretability of outputs. It is therefore crucial to equip decision-makers with the tools and knowledge they need to determine when to follow an algorithm's recommendations, in order to optimize the overall effectiveness of the system and distribute the burden of decision-making.
- The article “Predicting without explaining, or when algorithmic opacity clouds the picture” explores the challenges of understanding the results of automated systems, which can be difficult to interpret or question, and details practical ways to facilitate interaction with the proposed outputs and strengthen the reliability of hybrid decision-making procedures.