Artificial intelligence and judicial decision-making: Evaluating the role of AI in debiasing




judicial decision-making, judicial biases, artificial intelligence, risk assessment, debiasing


As arbiters of law and fact, judges are expected to decide cases impartially, grounding their decisions in authoritative legal sources and disregarding irrelevant factors. Empirical evidence, however, shows that judges are often influenced by implicit biases, which can compromise the impartiality of their judgment and threaten the right to a fair trial. In recent years, artificial intelligence (AI) has increasingly been used for a variety of applications in the public domain, often with the promise of being more accurate and objective than biased human decision-makers. Against this backdrop, this research article identifies how AI is being deployed by courts, mainly as decision-support tools for judges. It assesses the potential and limitations of these tools, focusing on their use for risk assessment. Further, the article shows how AI can be used as a debiasing tool, i.e., to detect patterns of bias in judicial decisions, allowing corrective measures to be taken. Finally, it assesses the mechanisms and benefits of such use.


Allhutter, Doris; Cech, Florian; Fischer, Fabian; Grill, Gabriel; Mager, Astrid (2020): Algorithmic profiling of job seekers in Austria. How austerity politics are made effective. In: Frontiers in Big Data 3 (5), pp. 1–17. DOI:

Angwin, Julia; Larson, Jeff; Mattu, Surya; Kirchner, Lauren (2016): Machine bias. There’s software used across the country to predict future criminals. And it’s biased against blacks. In: ProPublica, 23. 05. 2016. Available online at, last accessed on 22. 01. 2024.

Arnold, David; Dobbie, Will; Hull, Peter (2020): Measuring racial discrimination in bail decisions. In: NBER Working Paper Series, pp. 1–84. DOI:

Barabas, Chelsea; Virza, Madars; Dinakar, Karthik; Ito, Joichi; Zittrain, Jonathan (2018): Interventions over predictions. Reframing the ethical debate for actuarial risk assessment. In: Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, pp. 62–76. Available online at, last accessed on 22. 01. 2024.

Bielen, Samantha; Marneffe, Wim; Mocan, Naci (2021): Racial bias and in-group bias in virtual reality courtrooms. In: The Journal of Law and Economics 64 (2), pp. 269–300. DOI:

Bystranowski, Piotr; Janik, Bartosz; Próchnicki, Maciej; Skórska, Paulina (2021): Anchoring effect in legal decision-making. A meta-analysis. In: Law and Human Behavior 45 (1), pp. 1–23. DOI:

CEPEJ – European Commission for the Efficiency of Justice (2018): European ethical charter on the use of artificial intelligence in judicial systems and their environment. Strasbourg: Council of Europe. Available online at, last accessed on 22. 01. 2024.

CEPEJ (2023): Resource centre on cyberjustice and AI. Available online at, last accessed on 22. 01. 2024.

Chatziathanasiou, Konstantin (2022): Beware the lure of narratives. ‘Hungry Judges’ should not motivate the use of “Artificial Intelligence” in law. In: German Law Journal 23 (4), pp. 452–464. DOI:

Chen, Daniel (2019a): Machine learning and the rule of law. In: Michael Livermore and Daniel Rockmore (eds.): Law as Data. Santa Fe, NM: SFI Press, pp. 433–441. DOI:

Chen, Daniel (2019b): Judicial analytics and the great transformation of American law. In: Artificial Intelligence and Law 27 (1), pp. 15–42. DOI:

Chen, Daniel; Loecher, Markus (2019): Mood and the malleability of moral reasoning. In: SSRN Electronic Journal, pp. 1–62. DOI:

Danziger, Shai; Levav, Jonathan; Avnaim-Pesso, Liora (2011): Extraneous factors in judicial decisions. In: Proceedings of the National Academy of Sciences 108 (17), pp. 6889–6892. DOI:

Dietterich, Thomas (2019): Robust artificial intelligence and robust human organizations. In: Frontiers of Computer Science 13 (1), pp. 1–3. DOI:

Dunn, Matt; Sagun, Levent; Şirin, Hale; Chen, Daniel (2017a): Early predictability of asylum court decisions. In: ICAIL ’17. Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law. New York, NY: Association for Computing Machinery, pp. 233–236. DOI:

European Commission (2021): Proposal for a regulation of the European Parliament and the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. Brussels: European Commission. Available online at, last accessed on 22. 01. 2024.

Ghasemi, Mehdi; Anvari, Daniel; Atapour, Mahshid; Wormith, Stephen; Stockdale, Keira; Spiteri, Raymond (2021): The application of machine learning to a general risk-need assessment instrument in the prediction of criminal recidivism. In: Criminal Justice and Behavior 48 (4), pp. 518–538. DOI:

Green, Ben (2022): The flaws of policies requiring human oversight of government algorithms. In: Computer Law & Security Review 45, pp. 1–22. DOI:

Green, Ben; Chen, Yiling (2021): Algorithmic risk assessments can alter human decision-making processes in high-stakes government contexts. In: Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2). New York, NY: Association for Computing Machinery, pp. 1–33. DOI:

Heaven, Will (2020): Predictive policing algorithms are racist. In: MIT Technology Review, 17. 07. 2020. Available online at, last accessed on 10. 01. 2024.

Heyes, Anthony; Saberian, Soodeh (2019): Temperature and decisions. In: American Economic Journal 11 (2), pp. 238–265. DOI:

Jordan, Kareem; Bowman, Rachel (2022): Interacting race/ethnicity and legal factors on sentencing decisions. A test of the liberation hypothesis. In: Corrections 7 (2), pp. 87–106. DOI:

Justice Data Lab (2016): Incorporating offender assessment data to the justice data lab process. London: Ministry of Justice. Available online at, last accessed on 22. 01. 2024.

Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil (2018): Human decisions and machine predictions. In: The Quarterly Journal of Economics 133 (1), pp. 237–293. DOI:

Larret-Chahine, Louis (2023): Predictice lance assistant, une IA générative pour les professionnels du droit [Predictice launches Assistant, a generative AI for legal professionals]. In: Predictice Blog, 26. 05. 2023. Available online at, last accessed on 10. 01. 2024.

Lidén, Moa; Gräns, Minna; Juslin, Peter (2019): ‘Guilty, no doubt’. Detention provoking confirmation bias in judges’ guilt assessments and debiasing techniques. In: Psychology, Crime & Law 25 (3), pp. 219–247. DOI:

Mayson, Sandra (2019): Bias in, bias out. In: The Yale Law Journal 128 (8), pp. 2218–2300. Available online at, last accessed on 10. 01. 2024.

Miller, Andrea (2019): Expertise fails to attenuate gendered biases in judicial decision making. In: Social Psychological and Personality Science 10 (2), pp. 227–234. DOI:

Parasuraman, Raja; Manzey, Dietrich (2010): Complacency and bias in human use of automation. In: Human Factors 52 (3), pp. 381–410. DOI:

Rachlinski, Jeffrey; Wistrich, Andrew (2021): Benevolent sexism in judges. In: San Diego Law Review 58 (1), pp. 101–142. Available online at, last accessed on 22. 01. 2024.

Rassin, Eric (2020): Context effect and confirmation bias in criminal fact finding. In: Legal and Criminological Psychology 25 (2), pp. 80–89. DOI:

Rudin, Cynthia; Wang, Caroline; Coker, Beau (2020): The age of secrecy and unfairness in recidivism prediction. In: Harvard Data Science Review 2 (1), pp. 1–53. DOI: https://doi.org/10.1162/99608f92.6ed64b30

Salo, Benny; Laaksonen, Toni; Santtila, Pekka (2019): Predictive power of dynamic (vs. static) risk factors in the Finnish risk and needs assessment form. In: Criminal Justice and Behavior 46 (7), pp. 939–960. DOI:

Shroff, Ravi; Vamvourellis, Konstantinos (2022): Pretrial release judgments and decision fatigue. In: Judgment and Decision Making 17 (6), pp. 1176–120. DOI:

Steponenaite, Vilte; Valcke, Peggy (2020): Judicial analytics on trial. An assessment of legal analytics in judicial systems in light of the right to a fair trial. In: Maastricht Journal of European and Comparative Law 27 (6), pp. 759–773. DOI:

SWR (2022): OLG Stuttgart setzt KI bei Diesel-Klagen ein [Stuttgart Higher Regional Court deploys AI in diesel lawsuits]. In: SWR Aktuell, 24. 10. 2022. Available online at, last accessed on 10. 01. 2024.

Van Dijck, Gijs (2022): Predicting recidivism risk meets AI act. In: European Journal on Criminal Policy and Research 28 (3), pp. 407–423. DOI:

Van Essen, Laurus; Van Alphen, Huib; Van Tuinen, Jan-Maarten (n.d.): Risk assessment the Dutch way. A scalable, easy to use tool for probation reports. In: Confederation of European Probation News. Available online at, last accessed on 22. 01. 2024.

Wistrich, Andrew; Rachlinski, Jeffrey (2017): Implicit bias in judicial decision making. How it affects judgement and what judges can do about it. In: Sarah Redfield (ed.): Enhancing justice: Reducing bias, pp. 87–130. DOI:

Zenker, Frank (2021): De-biasing legal factfinders. In: Christian Dahlman, Alex Stein and Giovanni Tuzet (eds.): Philosophical foundations of evidence law. Oxford: Oxford University Press, pp. 395–410. DOI:




How to Cite

Lopes G. Artificial intelligence and judicial decision-making: Evaluating the role of AI in debiasing. TATuP [Internet]. 2024 Mar. 15 [cited 2024 Jun. 21];33(1):28-33. Available from: