Are we ready for judicial AI?

October 9, 2023

    Introduction

    The number and variety of tools and services leveraging Artificial Intelligence (AI) that humans benefit from in their everyday lives have grown rapidly in recent years. AI is present across many industries; in this article, however, I focus on the advantages and disadvantages of applying AI to judicial decision-making from an ethical perspective.

    The decisions made by a court are supported by legal reasoning and the outcomes of previous trials, and they acknowledge how social ethics transform over time. While humans weigh emotional aspects when making decisions, AI relies exclusively on data and a predefined algorithm, so the opinion of a ‘Judicial AI’ (Sourdin, 2018) is arguably less biased. Even if such systems might perform better than humans, it is fundamental to implement explainable models, because they have a tremendous impact on people’s lives.

    AI in the courtroom

    How could AI support judges?

    The information shared and discussed in court is recorded and documented, and the judge analyzes the collected materials, together with learnings from previously solved cases, before sentencing. In a criminal case, the severity of punishment is influenced by a risk assessment of how likely the individual is to re-offend (McKay, 2020). Although the decision appears to rest on a thorough analysis of the established facts, humans are prejudiced against certain demographics: judges observe the motives and cultural background of offenders, and these observations carry weight in the sentence. AI, by contrast, depends on the data provided during training and the algorithm that is implemented. It is capable of mimicking human behavior and senses; nonetheless, these solutions are not meant to replace humans with machines but rather to support judges with recommendations. In Mexico, for example, an AI system called Expertius advises judges on whether an individual can be granted a pension (Carneiro et al., 2014).
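
    To make the idea of an algorithmic risk assessment concrete, here is a minimal sketch of one in Python with scikit-learn. The features, data, and output are invented placeholders, not the variables of any real tool such as Expertius or STATIC-99R.

    ```python
    # Minimal sketch of an actuarial risk-assessment model.
    # Features and data are hypothetical placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features: [age_at_offense, prior_convictions]
    X_train = np.array([[19, 4], [45, 0], [23, 2], [52, 1], [31, 6], [60, 0]])
    y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = re-offended

    model = LogisticRegression().fit(X_train, y_train)

    # The model returns a re-offense probability that a judge could
    # weigh alongside other evidence: a recommendation, not a verdict.
    defendant = np.array([[28, 3]])
    print(f"Estimated re-offense risk: {model.predict_proba(defendant)[0, 1]:.2f}")
    ```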

    Analyzing written documents is time-consuming for a human and requires a tremendous amount of manual work. Looking for a particular piece of information across many files spread over numerous folders requires a good overview of their content. Recent innovations in Natural Language Processing (NLP) enable far easier evaluation of documents: NLP solutions can identify key phrases, detect the language, extract entities, or return the sentiment of a text. Moreover, they can be trained to read forms and generate tabular data from them (Aletras et al., 2016).
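
    As an illustration of entity extraction, the sketch below uses the open-source spaCy library on an invented sentence; it assumes the small English model en_core_web_sm has been installed.

    ```python
    # Minimal NLP sketch: extract named entities from legal text with spaCy.
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    text = ("On 12 March 2019 the district court ordered Jane Doe "
            "to pay 5,000 dollars in damages to Acme Ltd.")

    for ent in nlp(text).ents:
        # ent.label_ is the entity type (PERSON, ORG, DATE, MONEY, ...)
        print(f"{ent.text!r} -> {ent.label_}")
    ```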

    While recognizing the advantages of AI, organizations around the globe face ethical issues when using and implementing AI solutions, some with the potential to cause severe harm, “regarding algorithms as proprietary products, potentially with in-built statistical bias as well as the diminution of judicial human evaluation in favor of the machine” (Angwin et al., 2016).

    Organizations interviewed for a Capgemini report cited “reasons including the pressure to urgently implement AI, the failure to consider ethics when constructing AI systems, and a lack of resources dedicated to ethical AI systems” (Moore, 2019).

    Ethical considerations

    To gain accurate knowledge about ethical AI, I collected several resources on the topic, so that I do not rely exclusively on my background with Microsoft’s Responsible AI Principles as a Microsoft AI Most Valuable Professional (MVP). Scientific journals discussing the potential biases and moral issues of AI systems have existed since the late twentieth century.

    In Australia, a report was written (Dawson et al., 2019) to discuss the need for, and values behind, ethical principles, together with supporting best practices and use cases, “to ensure that existing laws and ethical principles can be applied in the context of new AI technologies”. The paper identifies eight principles: ‘Generates net-benefits’, ‘Do no harm’, ‘Regulatory and legal compliance’, ‘Privacy protection’, ‘Fairness’, ‘Transparency & Explainability’, ‘Contestability’, and ‘Accountability’.

    According to the response of the Society on Social Implications of Technology (SSIT) Australia to the discussion paper, there was a serious need to apply ethics to AI in particular, since ethical frameworks were underdeveloped. SSIT also raised concerns about the principles, for example that “human values are only obliquely referenced in the Core Principles” (Adamson et al., 2019).

    Microsoft’s Responsible AI Principles identify similar values to the discussion paper; they are defined as cornerstones that put people first, meaning that engineers work to ensure that AI develops in a way that benefits society while warranting people’s trust (Demarco, n.d.).

    Privacy and security

    Judicial data includes personal information, such as details of sentences previously handed down by judges, the various arguments, and the legal documents that support a particular case. This data must be protected by complying with privacy laws, which require transparency about how the data is ingested, used, and stored, and about how its consumers would use it.
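
    One common safeguard, sketched below, is to pseudonymize personal names before case documents are stored or used for training. The sketch reuses spaCy’s entity recognizer and is only an illustration; real anonymization of legal text requires far more care.

    ```python
    # Minimal privacy sketch: redact person names before storage.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def redact_persons(text: str) -> str:
        doc = nlp(text)
        redacted = text
        # Replace from the end so earlier character offsets stay valid.
        for ent in reversed(doc.ents):
            if ent.label_ == "PERSON":
                redacted = (redacted[:ent.start_char] + "[REDACTED]"
                            + redacted[ent.end_char:])
        return redacted

    print(redact_persons("Witness John Smith testified that Mary Jones was present."))
    ```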

    Fairness

    Before using data for model training, it is fundamental to examine how diversity and bias are reflected in it. Eckhouse et al. (2019) discuss the bias embedded in statistical risk-assessment algorithms and ask whether a system whose input data is derived from a prejudicial criminal justice system simply predicts similarly racially biased judgments.

    Algorithms such as STATIC-99R “do not differentiate between the severity of offenses that might be committed” (McKay, 2020); that is, the algorithm’s predictions are not affected by how severe the anticipated offense would be.
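
    A first, crude check for such embedded bias is to compare how often each demographic group is flagged as high risk; the sketch below does this with pandas on invented predictions. A real audit, as Eckhouse et al. (2019) argue, would also have to examine error rates such as false positives per group.

    ```python
    # Minimal fairness sketch: compare predicted high-risk rates per group.
    # Predictions and group labels are invented.
    import pandas as pd

    df = pd.DataFrame({
        "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
        "high_risk": [ 1,   0,   1,   0,   1,   1,   1,   0 ],
    })

    # If one group is flagged far more often, the training data
    # (or the model) deserves scrutiny before deployment.
    print(df.groupby("group")["high_risk"].mean())
    ```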

    Reliability and safety (Contestability)

    A ‘Judicial AI’ should make equally sound decisions on unknown information as on the scenarios experienced during training; nevertheless, humans must retain the final authority to sentence.
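
    A basic way to probe that requirement is to hold out cases the model never saw during training and compare its accuracy there with its training accuracy; a large gap means the system cannot be relied on for unseen scenarios. The sketch below demonstrates this on synthetic data with scikit-learn.

    ```python
    # Minimal reliability sketch: training vs. held-out accuracy.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"train accuracy: {model.score(X_tr, y_tr):.2f}")
    print(f"test accuracy:  {model.score(X_te, y_te):.2f}")
    ```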

    Transparency (and Explainability)

    Although the goal is to achieve high accuracy, another essential principle is ensuring that users can trust and accept AI solutions. Domain experts provide insights into how legal reasoning functions and how judicial decisions are made. Additionally, they can identify potential performance issues, biases, exclusionary practices, or unintended outcomes.
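
    One widely used, model-agnostic way to give such experts a handle on the model is to measure how strongly each input feature drives its predictions. The sketch below uses scikit-learn’s permutation importance on synthetic data, so the feature indices are placeholders for real case attributes.

    ```python
    # Minimal explainability sketch: permutation importance shuffles one
    # feature at a time and measures how much the model's score drops.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=300, n_features=5, random_state=1)
    model = RandomForestClassifier(random_state=1).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {imp:.3f}")
    ```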

    Another concern regarding transparency is that “the actual algorithm, its inputs or processes may be protected trade secrets so that individuals impacted by the algorithmic assessment cannot critique or understand the determination” (Carlson, 2017).

    Conclusion

    While AI solutions can make judicial decisions quickly, judges will not fully trust such sentences until they understand how the results are produced (Zerilli, 2020). The ethical principles discussed above could allow developers and end users to understand the algorithms, protect the processed data, and control the behavior of a ‘Judicial AI’ (McKay, 2020).

    References

    Adamson, G., Broman, M. M., Jacquet, A., Rigby, M., & Wigan, M. (2019). Society on Social Implications of Technology (SSIT) Australia response to the Discussion Paper on Artificial Intelligence: Australia’s Ethics Framework. https://ethicsinaction.ieee.org/

    Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D., & Lampos, V. (2016). Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective. PeerJ Computer Science, 2, e93. https://doi.org/10.7717/peerj-cs.93

    Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

    Carlson, A. M. (2017). The need for transparency in the age of predictive sentencing algorithms.

    Carneiro, D., Novais, P., Andrade, F., Zeleznikow, J., & Neves, J. (2014). Online dispute resolution: An artificial intelligence perspective. Artificial Intelligence Review, 41(2), 211–240.
    https://doi.org/10.1007/s10462-011-9305-z

    Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., Scowcroft, J., & Hajkowicz, S. (2019). Artificial intelligence: Australia’s ethics framework. https://consult.industry.gov.au/

    Demarco, J. (n.d.). We need rules of the road for responsible AI and data science.

    Eckhouse, L., Lum, K., Conti-Cook, C., & Ciccolini, J. (2019). Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment. Criminal Justice and Behavior, 46(2), 185–209. https://doi.org/10.1177/0093854818811379

    McKay, C. (2020). Predicting risk in criminal procedure: actuarial tools, algorithms, AI and judicial decision-making. Current Issues in Criminal Justice, 32(1), 22–39. https://doi.org/10.1080/10345329.2019.1658694

    Moore, M. (2019). Why addressing ethical questions in AI will benefit organizations [Press release]. Capgemini. https://www.capgemini.com

    Sourdin, T. (2018). Judge v robot? Artificial intelligence and judicial decision-making. UNSW Law Journal, 41(4).

    Zerilli, J. (2020). Algorithmic sentencing: Drawing lessons from human factors research.
