RESEARCH ARTICLE

AI and access to justice: How AI legal advisors can reduce economic and shame-based barriers to justice

Brandon Long*1, Amitabha Palmer2

* Corresponding author: brlong@bgsu.edu

1 Department of Philosophy, Bowling Green State University, Bowling Green, US

2 MD Anderson Cancer Center, The University of Texas, Houston, US

Abstract  ChatGPT – a large language model – recently passed the U.S. bar exam. The startling rise and power of generative artificial intelligence (AI) systems such as ChatGPT lead us to consider whether and how more specialized systems could be used to overcome existing barriers to the legal system. Such systems could be employed in either of the two major stages of the pursuit of justice: preliminary information gathering and formal engagement with the state’s legal institutions and professionals. We focus on the former and argue that developing and deploying publicly funded AI legal advisors can reduce economic and shame-based cultural barriers to the information-gathering stage of pursuing justice.

KI und Rechtszugang: Wie rechtsberatende KI-Systeme wirtschaftliche und schambedingte Barrieren für den Rechtszugang abbauen können

Zusammenfassung  ChatGPT – ein ‚Large Language Model‘ – hat kürzlich die US-amerikanische Anwaltsprüfung bestanden. Der erstaunliche Erfolg und die Leistungsfähigkeit generativer Systeme künstlicher Intelligenz (KI) wie ChatGPT veranlassen uns zu der Überlegung, ob und wie spezialisiertere Systeme eingesetzt werden könnten, um bestehende Barrieren im Rechtssystem zu überwinden. Solche Systeme könnten in den zwei wichtigsten Phasen der Rechtsfindung eingesetzt werden: der vorbereitenden Informationsbeschaffung und der formellen Zusammenarbeit mit den staatlichen Rechtsinstitutionen und -expert*innen. Wir konzentrieren uns auf Erstere und argumentieren, dass die Entwicklung und der Einsatz öffentlich finanzierter rechtsberatender KI-Systeme wirtschaftliche und schambedingte kulturelle Barrieren in der Informationsbeschaffungsphase der Rechtsfindung abbauen können.

Keywords  artificial intelligence, shame, barriers to justice, philosophy of technology, law

This article is part of the Special topic “AI for decision support: What are possible futures, social impacts, regulatory options, ethical conundrums and agency constellations?,” edited by D. Schneider and K. Weber. https://doi.org/10.14512/tatup.33.1.08

© 2024 by the authors; licensee oekom. This Open Access article is licensed under a Creative Commons Attribution 4.0 International License (CC BY).

TATuP 33/1 (2024), S. 21–27, https://doi.org/10.14512/tatup.33.1.21

Received: 22. 08. 2023; revised version accepted: 04. 01. 2024; published online: 15. 03. 2024 (peer review)

Introduction

In most countries, the legal system is the primary institution through which citizens pursue justice when wronged or exercise their rights. However, in many of these countries, a variety of barriers prevent citizens from accessing the legal system (OECD 2015). The primary barriers are economic, cultural, and political, and they generate significant externalities such as social exclusion, government dependence, weaker business assurances, and higher healthcare costs (OECD 2015, p. 4). It is not only a just but also a prudent state that seeks to reduce or eliminate barriers to justice.

ChatGPT – a large language model (LLM) – recently passed the U.S. bar exam (Arredondo et al. 2023). The startling rise and power of generative AI systems such as ChatGPT lead us to consider whether and how more specialized systems could be employed to overcome existing barriers to the legal system. Broadly, such systems could be employed in either of two major stages of the pursuit of justice: preliminary information gathering and formal engagement with the state’s legal institutions and professionals. We focus on the former and argue that developing and deploying publicly funded artificial intelligence legal advisors (AI LAs) can reduce economic and shame-based cultural barriers to the information-gathering stage of pursuing justice.

By AI LA we have in mind a system that provides potential litigants with reliable legal information that is specific and intelligible enough to allow them to make informed decisions about whether to formally contract a lawyer and/or formally pursue their claims in court. Several similar legal AI platforms already exist (AI Lawyer 2023; Rattray 2023; Casetext 2022) and will likely only improve over time. At one London-based law firm, 3,500 lawyers used Harvey AI to ask 40,000 legal questions in their day-to-day work between November 2022 and February 2023 (Rattray 2023). Specialized AI models can already give advice in specific domains of law. For example, JusticeBot (Tribunal administratif du logement 2023), a free tool for Quebec housing law, takes case facts into account in giving legal advice, asks pertinent questions, and cites similar cases for each relevant legal claim.


Throughout, we do not claim that highly reliable AI LAs currently exist but assume they will be available soon. As such, a foresighted technology assessment should consider their use before such technologies are developed and become available. We advocate that AI LAs be publicly funded. While a privately developed AI LA could achieve the same technical goals, a publicly funded AI LA supports broader economic accessibility. Moreover, democratic governments and international organizations should support implementation of AI LAs. Access to justice is intrinsically good but also provides instrumental benefits. It is a crucial prerequisite for establishing legal confidence and trust, thereby creating a favorable business environment, attracting investment, and contributing to overall economic spending (The Perryman Group 2009, pp. 19–21). Growing evidence suggests that the ability to address legal issues and obtain justice has a positive impact on inclusive economic growth (OECD 2013, p. 2, 2015, pp. 1–4). This impact is manifested through job creation, reduced absences from work due to legal problems (Task Force on Justice 2019, p. 45), improved housing stability, resolution of debt, and growth stimulated by confidence in assurances (Stolper et al. 2007, pp. 8–9). Equal access to justice may also foster economic growth by establishing a level playing field (Task Force on Justice 2019, pp. 39–41), especially for small or medium economic participants (OECD 2015, pp. 3–4). It also facilitates enforcement of contracts, encourages fair competition, and instills confidence in regulatory frameworks (OECD 2015, pp. 9–10). Thus, supporting access to justice can play a role in helping individuals overcome severe forms of social exclusion and in ensuring equal opportunities for economic advancement.

To support our thesis that AI LAs can reduce barriers to justice, we (1) outline common economic and shame-based cultural barriers to pursuing legal justice, (2) describe how an AI LA can mitigate barriers during the information-gathering stage, and (3) address potential limitations and harms. We restrict the scope of these claims to Anglo-American common law systems, which bring their own barriers to legal aid and to implementing AI systems, and whose law may or may not generalize more globally.

Economic barriers

Economic barriers to legal aid seeking

This section reviews economic barriers to justice and suggests how an AI LA could reduce them during the information-gathering stage. Economic barriers are not only financial; they also include the opportunity costs of time spent seeking information and of transportation.

A substantial body of evidence finds that people with low socio-economic status (SES) face greater barriers to the legal system – and therefore greater barriers to accessing justice (Commission on Legal Empowerment of the Poor 2008, pp. 6–9; Legal Services Corporation 2022, sec. 5; OECD 2015, p. 7). Poverty, poverty-related discrimination, and distrust present barriers to justice globally (Beqiraj and McNamara 2014, chaps. 4–5). What is more, marginalized – including economically marginalized – populations face unique barriers to justice in the UK (Gill et al. 2021) and in Canada (Silverman and Molnar 2016).

Financial barriers influence whether people pursue justice through the legal system. Across OECD countries, 42 % to 90 % of individuals who opt out of pursuing legal aid attribute their decision to financial considerations, whether real or perceived (OECD 2015, p. 5). Further, low-income Americans receive no or insufficient legal aid for 92 % of the legal problems they face (Legal Services Corporation 2022, pp. 47–48). Moreover, education and accessible information are also barriers to the justice system – 53 % of low-income Americans do not know whether they could find an affordable lawyer if they needed one (Legal Services Corporation 2022, pp. 51–52). These and other barriers lead them to pursue litigation at lower rates than higher SES groups. For example, in medical contexts, lower SES groups pursue litigation at lower rates than other groups because of a lack of access to legal resources and the nature of the contingency fee system in medical malpractice claims (McClellan et al. 2012; Viser 2022).

Overcoming economic barriers with publicly funded AI legal advisors

To specify ways in which AI technology can reduce economic barriers to justice, we propose conceiving of the pursuit of legal justice as having two stages:

  1. Information gathering: In this stage, one seeks to determine whether one has a claim, evaluate the strength of that claim, and evaluate the cost-benefit tradeoffs of formally pursuing one’s legal claim.
  2. Formal engagement: In this stage, one hires a lawyer and engages with state officials and the court system to pursue one’s claim.

People with limited economic resources cannot frivolously engage with an economically onerous legal system. Before one invests resources to make an informed formal pursuit of a claim, one must have some sense of one’s prospects for success and the underlying legal reasoning. Hence, existing economic barriers to information gathering deter people who, unbeknownst to them, have strong claims and who might have pursued them formally had they possessed this knowledge. We believe AI LAs are well-suited to address economic barriers to information gathering.

We have in mind an AI LA that could provide prospective litigants with (a) an assessment of the legal considerations and reasoning involved in their claim, (b) a crude assessment of their case’s likelihood of success in court, and (c) an interactive lay explanation of (a) and (b).


Assessment of legal considerations would include explanations of which laws apply, why and how they apply, and how similar cases have been treated. The crude assessment of the likelihood of legal success would be expressed as ‘poor,’ ‘unlikely,’ ‘unknown,’ ‘fair,’ or ‘strong.’ Critically, like existing LLMs, AI advisors will be conversational, allowing users to ask follow-up questions and request clarifications.
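To make this concrete, the sketch below shows one way the outputs described in (a)–(c) might be represented as a structured response. It is our illustration under stated assumptions, not a description of any existing system; the field names are hypothetical, and only the five-point likelihood scale is taken from the description above.

```python
# Minimal sketch (hypothetical field names): a structured response an AI LA might
# return during the information-gathering stage, before any conversational follow-up.
from dataclasses import dataclass, field
from typing import List

# Coarse likelihood scale described in the text above.
LIKELIHOOD_SCALE = ["poor", "unlikely", "unknown", "fair", "strong"]

@dataclass
class LegalAssessment:
    applicable_laws: List[str]      # which laws apply, and why and how they apply
    similar_cases: List[str]        # how similar cases have been treated
    likelihood_of_success: str      # one value from LIKELIHOOD_SCALE
    lay_explanation: str            # interactive, lay-friendly explanation of the above
    follow_up_questions: List[str] = field(default_factory=list)  # prompts for clarification

    def __post_init__(self):
        # Guard against outputs outside the coarse five-point scale.
        assert self.likelihood_of_success in LIKELIHOOD_SCALE

# Toy usage with invented content.
example = LegalAssessment(
    applicable_laws=["Plain-language summary of a hypothetical applicable statute"],
    similar_cases=["Hypothetical v. Example (2020): claimant succeeded"],
    likelihood_of_success="fair",
    lay_explanation="Plain-language explanation of the assessment above.",
)
```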

Citizens with a limited understanding of their legal situation and its likelihood of success, and with scarce economic resources, may be hesitant to approach a lawyer. The proposed capabilities of the AI LA align with the information citizens seek during information gathering. Unlike consulting a lawyer, a government-funded AI LA can provide the legal information sought without imposing burdensome financial, time, and transportation costs. Moreover, if it is publicly funded, citizens bear minimal direct costs, and online accessibility eliminates transportation costs and mitigates time costs.

Under this model, citizens who otherwise might not have pursued legitimate claims due to economic barriers in the information-gathering stage may now choose to pursue them. Nevertheless, economic barriers are not the only barriers to justice. We now turn to investigating how legal AI can address cultural and shame-based barriers to justice.

Cultural barriers to justice

In this section, we (a) define shame and how it relates to cultural norms, (b) explain how it can pose a barrier to pursuing justice in the information-gathering stage, and (c) suggest how a publicly funded AI LA can mitigate barriers to legal information seeking. Notice in the following that while economic barriers to justice may be addressed by funding human legal resources, AI LAs have unique features that address shame-based barriers in ways additional funding cannot.

Shame, stigma, culture

Shame is a “negative emotion that arises when one is seen and judged by others (whether they are present, possible or imagined) to be flawed in some crucial way, or when some part of oneself is perceived to be inadequate, inappropriate or immoral” (Dolezal and Lyons 2017, p. 257). Shame influences behavior because it can threaten one’s feelings of belonging and acceptance within interpersonal contexts, socially, and politically (Walker and Bantebya-Kyomuhendo 2014).

Shame can be acute or chronic. Acute shame is a single episode that arises unexpectedly, as in cases of embarrassment where, in social interaction, one’s self-presentation falters, fails, or falls short of socially desired modes of comportment (Dolezal and Lyons 2017). Chronic shame is often a result of general social stigma directed at marginalized social groups. For instance, shame is linked to racism, discrimination (Harris-Perry 2011), low SES (Walker and Bantebya-Kyomuhendo 2014), and body size (Farrell 2011).

Shame is often a function of cultural norms. Groups use shame to reinforce norms by stigmatizing norm violators in two ways: by stigmatizing the observed behavior of individuals, or by stigmatizing one’s relationship or belonging to a particular social group (Goffman 1986). Stigmatization gives rise to feelings of shame among the stigmatized, which in turn incentivizes or disincentivizes certain behaviors (Battle 2019, p. 645). Hence, stigma, shame, and cultural norms interact to influence behavior.

The justice system and social norms can leverage the power of shame through stigmatization to prevent certain behaviors and incentivize others. This is neither good nor bad, but rather depends on the nature of the norms being supported. Stigmatizing theft or domestic violence isn’t a bad cultural practice. However, we suggest shame and shame-inducing norms that impede the legitimate pursuit of justice are prima facie bad and should be eliminated. These include norms against litigation due to group membership and norms against disclosing information that stigmatizes or shames.

In the cases below, we show how AI LAs can mitigate shame-based barriers to justice due to the human propensity not to feel judged when interacting with AI (Bartneck et al. 2010; Holthöwer and Van Doorn 2023).

Overcoming shame-based cultural barriers with publicly funded AI legal advisors

First case: Shame-based barriers to justice for victims of intimate partner violence

Victims of intimate partner violence (IPV) often forgo pursuing justice because the stigmatization of being a victim can lead to shame (Overstreet and Quinn 2013). IPV survivors grapple with a lasting sense of shame after their experience, stemming from lost self-identity, self-blame, and fear of judgment (Camp 2022, p. 103). Seeking help often leads to encounters with people or institutions – including the legal system – that worsen rather than alleviate this shame (Camp 2022, pp. 136–137). Understandably, individuals who perceive a stigma associated with being a victim of IPV are less likely to seek institutional support, and when they do disclose experiences, they prefer indirect language that hints at abuse without disclosing details (Williams and Mickelson 2008).

The most common legal intervention for victims of IPV is the protection order, a vital tool for responding to IPV. Yet, obtaining a protection order requires survivors to enter “a process that often deprives them of their privacy and ability to control their self-image – experiences anchored in shame” (Camp 2022, pp. 103–104). This suggests shame may be a barrier both to disclosing information and to seeking aid. For example, at one U.S. abuse shelter between 2004 and 2008, only 32.25 % of IPV victims had protection orders upon appearing at the shelter (Durfee and Messing 2012).

AI LAs can mitigate shame-based barriers to information gathering for victims of IPV. Interacting with an AI rather than a human restores privacy and eliminates the shame that can be induced by the presence of others, allowing a victim of IPV to safely learn (a) what legal recourse and protections are available to them, (b) how to pursue legal recourse/protection, (c) whether their circumstances meet legal criteria, (d) the likelihood they will succeed in obtaining legal recourse or protection, and (e) all of the above in interactive, lay-friendly language. Certainly, victims of IPV will have to engage with humans if they decide to pursue recourse formally. However, AI LAs allow them to acquire the information needed for an informed legal decision. Moreover, since the social stigmas associated with being a victim of IPV are unjustified and harmful, access to an AI LA justifiably reduces shame-based harms to victims of IPV.

Second case: Shame-based barriers to justice for marginalized groups when cultural norms obscure legal rights

Cultural norms may stigmatize seeking legal aid among women or people in positions of lower social status, where fear of reprisal or shame keeps legal complaints from being pursued (Long Chamness and Ponce 2019, pp. 13–17). This may be common outside the U.S., where compensation culture is weak, nonexistent, or displaced by other norms in specific contexts. Consider the following case involving inheritance rights.

In some communities, there is a cultural expectation that women will relinquish their inheritance rights to their brothers when their parents die (Nayeen 2020). In doing so, a woman protects and ensures her culturally coveted status as a ‘good sister’. Conversely, failing to relinquish her right puts her social status in jeopardy and incurs the stigma of norm violation.

The prevalence of a cultural norm for women to relinquish their inheritance rights can create confusion about what legal inheritance rights women have. Moreover, it can prevent women from inquiring about their rights as this could be interpreted as a pre-emptive norm violation (Nayeen 2020). Even inquiring into one’s rights can incur stigma or shame – especially if one isn’t yet sure of the extent of one’s rights or whether one would indeed pursue them.

This cost often prohibits investigating legal rights, which prevents women from obtaining the information required to make informed legal decisions. With full information, a woman may reason that the benefit of asserting her inheritance rights offsets the social cost of norm violation. Furthermore, such social norms can never be overturned until women living in these communities accurately understand their rights. In short, the conflation of social norms with legal rights deprives women (and other similarly situated marginalized groups) of the opportunity to make informed decisions regarding whether they wish to exercise their inheritance rights.

An AI LA can provide women with the information necessary to make informed decisions about whether the tradeoff between social sanction and exercising inheritance rights (or other rights) is worth it. Such information includes: (a) clarification of any confusion with respect to the nature and extent of the rights in question, (b) other legal variables, (c) the legal process required for exercising rights, (d) a coarse-grained assessment of the case’s likelihood of success in court, and (e) an interactive lay explanation of the preceding information.

This is how our model can mitigate cultural barriers to justice in the information-gathering stage. An AI LA permits private inquiries into one’s rights in a way that is immune to shame. This is especially true in small or tight-knit communities, where being seen walking into a law office could cause gossip and shame.

Third case: Shame as a barrier to justice for victims of fraud

Disclosing to others that one has been a victim of fraud brings about acute shame that can prevent victims from pursuing justice. In the U.S., for example, fraud is an enormous problem: consumers lost nearly $9 billion in 2022 (Fair 2023). However, victims rarely come forward and pursue justice. A survey conducted by the American Association of Retired Persons (AARP) found that an estimated 15 % of victims contacted authorities (Williams 2023). Another report found that 30 % of respondents indicated they would be embarrassed to admit to being a victim of a financial scam, whether to friends, family, or authorities (Aviva 2021, p. 11).


Shame may lead such victims to forgo information gathering, since they can readily identify the costs (shame) but not the benefits. Hence, such victims frequently do not pursue legal claims because they never acquire the information needed to evaluate the benefits.

Again, a publicly funded AI LA could reduce shame-induced barriers to pursuing justice in the information-gathering stage for victims of fraud. Formally pursuing legal recourse requires understanding, at minimum, (a) whether the law applies to one’s case, (b) what one is entitled to in a successful judgment, (c) a coarse-grained assessment of one’s chances of success, and (d) an interactive lay explanation of the above. The private nature of interactions with AI shields victims from the potential gossip and shame of interacting with a human. An AI LA can thus mitigate shame-based barriers to pursuing justice by reducing the shame-based cost of seeking the legal information a victim needs to make an informed decision about the tradeoffs between the costs (shame-based and otherwise) and benefits of formally pursuing litigation.

Discussion

One worry with AI LAs concerns reliability and accuracy standards (Grimm et al. 2021). However, our position does not depend on the reliability and accuracy of current systems. Rather, we hold that such systems are permissibly deployed when they are as reliable and accurate as human lawyers. Early investigations suggest that high levels of reliability and accuracy will be attained in specific domains (e.g., the above-mentioned JusticeBot for housing law) before an all-purpose LLM can handle all legal domains (Deakin and Markou 2020; Hildebrandt 2016). Insofar as this is the case, we support domain-specific models, since some mitigation of the barriers to justice is better than no mitigation.

Harms

Our proposal might increase caseloads in the legal system, which is concerning because it would add to the funding needs of typically underfunded and overburdened systems. This legitimate concern points to the inevitable trade-offs emergent technologies generate.

However, this concern relies on the assumption that access to AI LAs will only increase the number of cases going to trial. AI LAs may also reduce litigation in some cases. Litigants who go to trial systematically overestimate their chance of success (Korobkin and Guthrie 1994; Weinstein 2002). By providing litigants with estimates of success, an AI LA may lead those who would otherwise have overestimated their likelihood of gain not to pursue a trial they otherwise would have. Reducing litigant miscalculation may also lead to more settlements than trials (Korobkin and Guthrie 1994; Priest and Klein 1984), and settlements are less burdensome to the legal system. We do not speculate on the net change in the number of cases in the legal system, but the effects will likely work in both directions.

Finally, this concern overlooks the benefits of broader AI implementation within the law. Tasks that used to take days of research can now be completed in minutes. The cost and time associated with each case will likely decrease with widespread AI implementation.

Bias

A common concern with many AI systems is that they can inherit and reproduce biases in their training data. This concern also applies to AI LAs, which will have been trained on case law rife with historical biases. The topic of bias in AI is large and the research ongoing. An exhaustive treatment goes beyond the scope of this research article. However, a brief response is warranted.

First, the question of bias will always be comparative. There is unlikely ever to exist a human-developed system free of all biases. The question, therefore, is whether an AI LA could have fewer biases than the current system. We believe the answer is ‘yes’ because it is much easier to alter the biases of an AI than those of the individuals and institutions that compose the entire legal system.

Bias enters AI systems through their training data, algorithms, and outputs. We know that biased data leads to biased algorithms. Therefore, it is possible to mitigate bias by debiasing the training set or through careful selection of training data. Where this isn’t possible, it is possible to adjust algorithms that we know were developed using biased data. Finally, if we know in advance that an AI’s outputs are biased, it is possible to have the AI correct in the other direction (Fazelpour and Danks 2021). While these correction measures are not easy or foolproof, they are easier and more likely to succeed than attempting to correct the implicit and systemic biases of every individual and institution that composes current justice systems. Additionally, addressing biases in a legal system can happen concurrently with addressing biased AI LAs in the ways we have mentioned.
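As a concrete illustration of the training-data route, the sketch below applies a standard pre-processing ‘reweighing’ step, weighting each training example so that a protected attribute and the outcome label become statistically independent in the weighted data. This is a generic debiasing technique offered for illustration only, not the authors’ proposal or any specific system’s method; the data and field names are hypothetical.

```python
# Minimal sketch (hypothetical data and field names): pre-processing "reweighing",
# which weights each example so that group membership and outcome label are
# statistically independent in the weighted training set.
from collections import Counter

def reweighing(examples):
    """examples: list of dicts with keys 'group' (protected attribute) and
    'label' (outcome). Returns one sample weight per example, usable when
    training a downstream model."""
    n = len(examples)
    count_group = Counter(e["group"] for e in examples)
    count_label = Counter(e["label"] for e in examples)
    count_joint = Counter((e["group"], e["label"]) for e in examples)

    weights = []
    for e in examples:
        g, y = e["group"], e["label"]
        # expected joint frequency under independence / observed joint frequency
        expected = (count_group[g] / n) * (count_label[y] / n)
        observed = count_joint[(g, y)] / n
        weights.append(expected / observed)
    return weights

# Toy example: historical case outcomes skewed against group "B".
data = [
    {"group": "A", "label": "won"}, {"group": "A", "label": "won"},
    {"group": "A", "label": "lost"}, {"group": "B", "label": "won"},
    {"group": "B", "label": "lost"}, {"group": "B", "label": "lost"},
]
print(reweighing(data))  # under-represented (group, label) pairs get weights > 1
```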

Responsibility

AI generates questions about legal responsibility within existing legal frameworks (Beckers and Teubner 2021). An AI LA could make two major kinds of errors that lead to harm and raise questions about responsibility and liability: the AI (a) recommends pursuing litigation when there is no viable claim, or (b) recommends abstaining from litigation when in fact there is a viable claim. The topic of responsibility in AI ethics is rich and complicated and cannot be addressed comprehensively within the constraints of this research article. Nevertheless, a few brief remarks are in order.

In the first case, the issues of responsibility and liability are relatively unproblematic. The AI LA recommends pursuing a claim which leads the user to contact a lawyer. If the AI has erred, the lawyer should explain why further legal action would be inappropriate. If the lawyer is correct, there is no harm save a consultation fee. If the lawyer is incorrect, the lawyer bears the responsibility just as they are currently held responsible for poor legal advice.

In the second case, a fund liability model is appropriate (Beckers and Teubner 2021, pp. 139–140). In this model, a regulatory agency creates and administers a fund or insurance scheme to provide compensation for harm. Firms in the industry sector finance the fund according to their market share, and the agency determines ex-post liability for each case or class of cases. Finally, such a model will require that AI LAs be audited at appropriate intervals, since naive individuals will not know when the AI’s advice not to litigate is mistaken.
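For illustration only, the arithmetic of financing such a fund is straightforward: each firm contributes pro-rata to its market share of the AI LA sector. The vendor names and figures in the sketch below are hypothetical.

```python
# Minimal sketch (hypothetical vendors and figures): pro-rata financing of a
# compensation fund under a fund liability model.
def fund_contributions(market_shares, fund_target):
    """market_shares: dict mapping firm -> market share; fund_target: total
    amount the regulator wants the fund to hold. Returns each firm's contribution."""
    total = sum(market_shares.values())
    return {firm: fund_target * share / total for firm, share in market_shares.items()}

print(fund_contributions({"VendorA": 0.5, "VendorB": 0.3, "VendorC": 0.2}, 10_000_000))
# -> {'VendorA': 5000000.0, 'VendorB': 3000000.0, 'VendorC': 2000000.0}
```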

Conclusion

We have explained how economic cost and shame present barriers to accessing the justice system, how AI LAs may alleviate these barriers, and covered some limitations and harms of such systems. There is no one solution to every legal barrier for everyone, but AI LAs present several viable solutions. Such advisors can reduce economic and shame-based barriers to the information-gathering stage of pursuing justice. This is significant since lack of information is itself a barrier to informed decision-making regarding whether to formally pursue justice. We take the value of justice to be intrinsic and self-evident; therefore, expanding access to justice is a good thing. The legal system becomes more just when the cases reaching the court do so based on merit rather than arbitrary barriers.

Funding  This work received no external funding.

Competing interests  The authors declare no competing interests.

References

AI Lawyer (2023): AI lawyer blog. Available online at https://ailawyer.pro/blog, last accessed on 03. 01. 2024.

Arredondo, Pablo; Driscoll, Sharon; Schreiber, Monica (2023): GPT-4 passes the bar exam. What that means for artificial intelligence tools in the legal profession. In: Stanford Law School Blog. Available online at https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/, last accessed on 03. 01. 2024.

Aviva (2021): The Aviva fraud report. The online fraud epidemic during the pandemic. London: Aviva. Available online at https://static.aviva.io/content/dam/aviva-corporate/documents/newsroom/pdfs/reports/Aviva_Fraud_Report_2021.pdf, last accessed on 03. 01. 2024.

Bartneck, Christoph; Bleeker, Timo; Bun, Jeroen; Fens, Pepijn; Riet, Lynyrd (2010): The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. In: Paladyn, Journal of Behavioral Robotics 1 (2), pp. 109–115. https://doi.org/10.2478/s13230-010-0011-3

Battle, Brittany (2019): “They look at you like you’re nothing”. Stigma and shame in the child support system. In: Symbolic Interaction 42 (4), pp. 640–668. https://doi.org/10.1002/symb.427

Beckers, Anna; Teubner, Gunther (2021): Three liability regimes for artificial intelligence. Algorithmic actants, hybrids, crowds. Oxford: Hart. https://doi.org/10.5040/9781509949366

Beqiraj, Julinda; McNamara, Lawrence (2014): International access to justice. Barriers and solutions. London: International Bar Association. Available online at https://www.biicl.org/documents/485_iba_report_060215.pdf?showdocument=1, last accessed on 03. 01. 2024.

Camp, A. Rachel (2022): From experiencing abuse to seeking protection. Examining the shame of intimate partner violence. In: UC Irvine Law Review 13 (1), pp. 103–154. Available online at https://scholarship.law.uci.edu/ucilr/vol13/iss1/7, last accessed on 03. 01. 2024.

Casetext (2022): Westlaw, Lexis outranked by Casetext on G2. In: Casetext Blog, 27. 06. 2023. Available online at https://casetext.com/blog/casetext-to-join-thomson-reuters-ushering-in-a-new-era-of-legal-technology-innovation, last accessed on 03. 01. 2024.

Commission on Legal Empowerment of the Poor (2008): Making the law work for everyone. New York, NY: United Nations Development Programme. Available online at https://digitallibrary.un.org/record/633966?ln=en, last accessed on 03. 01. 2024.

Deakin, Simon; Markou, Christopher (eds.) (2020): Is law computable? Critical perspectives on law and artificial intelligence. Oxford: Hart. https://doi.org/10.5040/9781509937097

Dolezal, Luna; Lyons, Barry (2017): Health-related shame. An affective determinant of health? In: Medical Humanities 43 (4), pp. 257–263. https://doi.org/10.1136/medhum-2017-011186

Durfee, Alesha; Messing, Jill (2012): Characteristics related to protection order use among victims of intimate partner violence. In: Violence Against Women 18 (6), pp. 701–710. https://doi.org/10.1177/1077801212454256

Fair, Lesley (2023): FTC crunches the 2022 numbers. See where scammers continue to crunch consumers. In: FTC Business Blog, 23. 02. 2023. Available online at https://www.ftc.gov/business-guidance/blog/2023/02/ftc-crunches-2022-numbers-see-where-scammers-continue-crunch-consumers, last accessed on 03. 01. 2024.

Farrell, Amy (2011): Fat shame. Stigma and the fat body in American culture. New York, NY: New York University Press.

Fazelpour, Sina; Danks, David (2021): Algorithmic bias. Senses, sources, solutions. In: Philosophy Compass 16 (8), p. e12760. https://doi.org/10.1111/phc3.12760

Gill, Nick et al. (2021): The tribunal atmosphere. On qualitative barriers to access to justice. In: Geoforum 119, pp. 61–71. https://doi.org/10.1016/j.geoforum.2020.11.002

Goffman, Erving (1986): Stigma. Notes on the management of spoiled identity. New York, NY: Simon & Schuster.

Grimm, Paul; Grossman, Maura; Cormack, Gordon (2021): Artificial intelligence as evidence. In: Northwestern Journal of Technology and Intellectual Property 19 (1), pp. 9–106. Available online at https://scholarlycommons.law.northwestern.edu/njtip/vol19/iss1/2, last accessed on 04. 01. 2024.

Harris-Perry, Melissa (2011): Sister citizen. Shame, stereotypes, and black women in America. New Haven, CT: Yale University Press.

Hildebrandt, Mireille (2016): Law as information in the era of data-driven agency. In: The Modern Law Review 79, pp. 1–29. https://doi.org/10.1111/1468-2230.12165

Holthöwer, Jana; Van Doorn, Jenny (2023): Robots do not judge. Service robots can alleviate embarrassment in service encounters. In: Journal of the Academy of Marketing Science 51 (4), pp. 767–784. https://doi.org/10.1007/s11747-022-00862-x

Korobkin, Russell; Guthrie, Chris (1994): Psychological barriers to litigation settlement. An experimental approach. In: Michigan Law Review 93 (1), pp. 107–192. https://doi.org/10.2307/1289916

Legal Services Corporation (2022): The justice gap. The unmet civil legal needs of low-income Americans. Washington, DC: Legal Services Corporation. Available online at https://lsc-live.app.box.com/s/xl2v2uraiotbbzrhuwtjlgi0emp3myz1, last accessed on 03. 01. 2024.

Long Chamness, Sarah; Ponce, Alejandro (2019): Measuring the justice gap. A people-centered assessment of unmet justice needs around the world. Washington, DC: World Justice Project. Available online at https://worldjusticeproject.org/our-work/research-and-data/access-justice/measuring-justice-gap, last accessed on 04. 01. 2024.

McClellan, Frank; White, Augustus; Jimenez, Ramon; Fahmy, Sherin (2012): Do poor people sue doctors more frequently? Confronting unconscious bias and the role of cultural competency. In: Clinical Orthopaedics & Related Research 470 (5), pp. 1393–1397. https://doi.org/10.1007/s11999-012-2254-2

Nayeen, Zulker (2020): Social and cultural barriers in accessing civil justice system. In: The Daily Star, 11. 02. 2020. Available online at https://www.thedailystar.net/law-our-rights/news/social-and-cultural-barriers-accessing-civil-justice-system-1866442, last accessed on 04. 01. 2024.

OECD (2013): What makes civil justice effective? In: OECD Economics Department Policy Note 18, pp. 1–16. Available online at https://web-archive.oecd.org/2013-06-20/238744-Civil%20Justice%20Policy%20Note.pdf, last accessed on 04. 01. 2024.

OECD (2015): Equal access to justice. Expert roundtable notes. Paris: OECD. Available online at https://www.oecd.org/gov/Equal-Access-Justice-Roundtable-background-note.pdf, last accessed on 04. 01. 2024.

Overstreet, Nicole; Quinn, Diane (2013): The intimate partner violence stigmatization model and barriers to help seeking. In: Basic and Applied Social Psychology 35 (1), pp. 109–122. https://doi.org/10.1080/01973533.2012.746599

Priest, George; Klein, Benjamin (1984): The selection of disputes for litigation. In: The Journal of Legal Studies 13 (1), pp. 1–55. https://doi.org/10.1086/467732

Rattray, Kate (2023): Harvey AI. What we know so far. In: Clio Blog, 10. 10. 2023. Available online at https://www.clio.com/blog/harvey-ai-legal/, last accessed on 03. 01. 2024.

Silverman, Stephanie; Molnar, Petra (2016): Everyday injustices. Barriers to access to justice for immigration detainees in Canada. In: Refugee Survey Quarterly 35 (1), pp. 109–127. https://doi.org/10.1093/rsq/hdv016

Stolper, Antonia; Walker, Mark; Sabatini, Christopher; Marczak, Jason (2007): Rule of law, economic growth, and prosperity. New York, NY: Americas Society and Council of the Americas Rule of Law Working Group. Available online at https://www.as-coa.org/sites/default/files/Rule%20of%20Law.pdf, last accessed on 03. 01. 2024.

Task Force on Justice (2019): Justice for all. Final report. New York, NY: Center on International Cooperation. Available online at https://www.sdg16.plus/resources/justice-for-all-report-of-the-task-force-on-justice/, last accessed on 03. 01. 2024.

The Perryman Group (2009): The impact of legal aid services on economic activity in Texas. An analysis of current efforts and expansion potential. Waco, TX: The Perryman Group. Available online at https://legalaidresearch.org/2020/02/04/the-impact-of-legal-aid-services-on-economic-activity-in-texas-an-analysis-of-current-efforts-and-expansion-potential/, last accessed on 04. 01. 2024.

Tribunal administratif du logement (2023): JusticeBot – interactive legal information tool. Available online at https://www.tal.gouv.qc.ca/en/justicebot-interactive-legal-information-tool, last accessed on 03. 01. 2024.

Viser, Cassidy (2022): The economics of injustice. Stratification in medical malpractice claims by poor and vulnerable patients. In: Georgetown Journal on Poverty Law and Policy 29 (2), pp. 273–285. Available online at https://www.law.georgetown.edu/poverty-journal/wp-content/uploads/sites/25/2022/05/GT-GPLP220017-3.pdf-Cassidy-Viser.pdf, last accessed on 04. 01. 2024.

Walker, Robert; Bantebya-Kyomuhendo, Grace (2014): The shame of poverty. Oxford: Oxford University Press.

Weinstein, Ian (2002): Don’t believe everything you think. Cognitive bias in legal decision making. In: Clinical Law Review 8, pp. 783–834. Available online at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2779670, last accessed on 04. 01. 2024.

Williams, Alicia (2023): Consumer fraud awareness gets D grade. In: AARP Research, 17. 05. 2023. https://doi.org/10.26419/res.00606.001

Williams, Stacey; Mickelson, Kristin (2008): A paradox of support seeking and rejection among the stigmatized. In: Personal Relationships 15 (4), pp. 493–509. https://doi.org/10.1111/j.1475-6811.2008.00212.x

Authors

Brandon Long

is an M. A. student at Bowling Green State University and will pursue a PhD after completing the degree. He is currently working on bioethical arguments for and against genetic enhancement.

Amitabha Palmer, PhD

holds a PhD in Philosophy and is a HEC-C certified Clinical Ethicist. He is an Instructor and Clinical Ethicist at the University of Texas MD Anderson Cancer Center. His primary areas of research include the ethics of AI, medical ethics, political philosophy, and the effects of medical misinformation on clinical interactions.