Georgina Wins Scholarship with Legal Essay

Posted: 24th September 2020

Georgina has beaten thousands of other students to gain a scholarship on one of Immerse Education’s summer courses.

Immerse Education is an award-winning provider of immersive summer academic programmes for students, led by tutors from world-class universities including Cambridge, Oxford and Harvard. It runs a hugely popular essay-writing competition, with winners and runners-up securing scholarships on one of its summer programmes at Cambridge University. This year was the competition’s most successful yet, with thousands of entries globally.

Georgina was a runner-up in the 16–18-year-old category, with her response to the question: “Legal decisions should be automated using algorithms. Discuss.”


Georgina Wins Scholarship in Prestigious Essay Writing Competition


Georgina’s essay, which you can read in full below, reached the conclusion that although algorithms offer the possibility of “rapid calculation”, the difficulty of verifying their judgements meant they would continue the “maltreatment of minority groups” and “should not be prioritised over erasing the bias within our legal system, which can be better achieved by continuing to have legal decisions made by lawyers.”

Georgina cannot wait to take up her prize in the summer of 2021:

“I am immensely grateful to have been awarded this opportunity and can only hope I will be able to use it to do something effective in the future.”

Burgess Hill Girls Head, Liz Laybourn, is incredibly proud of Georgina’s success.

“This achievement is indicative of an outstanding student who continues to follow the Burgess Hill Girls ethos by taking opportunities and pushing herself out of her comfort zone. There is no glass ceiling for Georgina!”


Legal decisions should be automated using algorithms. Discuss.

In our rapidly evolving society, algorithms are becoming increasingly pervasive and commonplace
tools. There are, however, many ethical and social issues surrounding their progress, from ingrained
prejudice to security, and these are amplified when discussing their place in legal decisions. This essay
aims to establish the primary reasons why algorithmic decisions should not take precedence over
decisions made by lawyers.

The most common form of this argument is that an algorithmic decision would be removed from the
conscious and unconscious biases by which a judge or jury would be influenced (1). However, an
algorithm of this complexity and influence would, as has been seen in algorithms concerning employment
and standardised grading, require previous case data in its initial programming to ensure
accuracy. Unless this data could be adjusted to reflect the developments of prior-established case law,
past cases, subject to the prejudices that society is moving away from, would remain permanent factors
in the algorithm. Due weight would not be given to progress.

This introduces another issue regarding judge-made law. If the algorithm were created for a judicial
role, then cases which require distinguishing (2) might not be detected or processed as such, so
case law would cease to progress, as progression requires independent thought. Furthermore, a ratio
decidendi would be difficult to carry forward due to the technical complexity. As Završnik states in
his essay on criminal justice and artificial intelligence (3): “A decision-making process that lacks
transparency and comprehensibility is not considered legitimate and non-autocratic.”

Nonetheless, for administrative decisions that involve consulting legislation, an algorithm
would save time, minimise case backlogs and reduce the impact of human error through its objectivity,
while allowing money that would otherwise be paid to lawyers to be redistributed. Additionally, as law
follows an almost mathematical logic, an algorithm could assist a lawyer by inputting the court’s
interpretation of the case facts into the relevant statute.

Currently the European Union places restrictions on the use of algorithms (4): they are prohibited
from assisting in legally binding decisions and, where they are used, it must be with the consent of
the individual involved. This is because there are also immense security risks around the protection of
data and the possibility of hacking to interfere with prosecution and/or the due process of law. These
risks overlap with witness protection and threaten a sexual offence victim’s right to ‘anonymity’ (5).

Conversely, the COMPAS (6) (Correctional Offender Management Profiling for Alternative Sanctions)
algorithm has been introduced across the United States to assess potential recidivism risk. In the
parole case of Loomis v. Wisconsin, the court deferred to this algorithm, showing the
extent to which our globalised society trusts algorithmic decision-making while neglecting
‘automation bias’.

Therefore, algorithms hold the possibility of rapid calculation, yet the reality of stunted social
progression and an inherent opacity such that verifying their decisions would be nearly
impossible, thereby continuing the maltreatment of minority groups. While others may believe the
American approach to be rational, it must be weighed against the importance of data
protection for individuals’ fundamental liberties and the sanctity of legal proceedings. Thus
introducing algorithms should not be prioritised over erasing the bias within our legal system,
which can be better achieved by continuing to have legal decisions made by lawyers.


1 K. Klungtveit. Is Artificial Intelligence Good or Bad for Lawyers? The Lawyer Portal. 04-04-2018. https://www.thelawyerportal.com/blog/artificial-intelligence-good-bad-lawyers/ (Accessed 06-05-2020).

2 C. Elliott, F. Quinn. English Legal System, London: Pearson Education, 2010, pp. 15–30.

3 A. Završnik. Criminal justice, artificial intelligence systems, and human rights. SpringerLink. 20-02-2020. https://link.springer.com/article/10.1007/s12027-020-00602-0 (Accessed 09-07-2020).

4 European Commission. Are there restrictions on the use of automated decision-making? https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/dealing-citizens/are-there-restrictions-use-automated-decision-making_en#references (Accessed 07-08-2020).

5 Crown Prosecution Service. Witness protection and anonymity, Legal guidance. 2017. https://www.cps.gov.uk/legal-guidance/witness-protection-and-anonymity (Accessed 29-07-2020).

6 A. Završnik. Criminal justice, artificial intelligence systems, and human rights. SpringerLink. 20-02-2020. https://link.springer.com/article/10.1007/s12027-020-00602-0 (Accessed 09-07-2020).


Reference page:


Are there restrictions on the use of automated decision-making? European Commission. https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/dealing-citizens/are-there-restrictions-use-automated-decision-making_en#references (Accessed 07-08-2020).

Brooke, H., Algorithms, Artificial Intelligence and the Law. Supreme Court. 12-11-2019.
https://www.supremecourt.uk/docs/speech-191112.pdf (Accessed 29-06-2020).

Elliott, C., Quinn, F., English Legal System, London: Pearson Education, 2010, pp. 15–30.

Harris, B., Could an AI ever replace a judge in Court? World Government Summit. 11-07-2018. https://www.worldgovernmentsummit.org/observer/articles/could-an-ai-ever-replace-a-judge-in-court (Accessed 21-08-2020).

Irving, A., Rise of the Algorithms. UK Human Rights Blog. 04-11-2019. https://ukhumanrightsblog.com/2019/11/04/rise-of-the-algorithms/ (Accessed 09-08-2020).

Klungtveit, K., Is Artificial Intelligence Good or Bad for Lawyers? The Lawyer Portal. 04-04-2018. https://www.thelawyerportal.com/blog/artificial-intelligence-good-bad-lawyers/ (Accessed 06-05-2020).

Witness protection and anonymity, Legal guidance. The Crown Prosecution Service. 2017. https://www.cps.gov.uk/legal-guidance/witness-protection-and-anonymity (Accessed 29-07-2020).

Završnik, A., Criminal justice, artificial intelligence systems, and human rights. SpringerLink. 20-02-2020. https://link.springer.com/article/10.1007/s12027-020-00602-0 (Accessed 09-07-2020).