Cambridge University Society for Women Lawyers

Kindly Sponsored by Ashurst

Artificial Intelligence in the Law: Help or Hindrance?

Written by the Second Runner-Up of the CUSWL Law and Tech Essay Competition, Aoibhín Spriggs

Artificial Intelligence and its growing role

Artificial Intelligence (AI) has come to dominate almost every aspect of modern life, and it has begun to enter the distinctly human arena of the law. Helping with legal analysis, aiding in fraud detection, and even assessing the risk of re-offending: it seems that algorithms will only serve to improve our justice system. However, there may still be cause for concern. We need to continue to examine how algorithms work, and indeed how we use them, before we hand over the gavel.

The importance of this issue

The realm of law is one which, particularly in the UK, is shrouded in traditions and customs: the antithesis of modern machine learning. This, however, seems to be changing.

Companies such as the software vendor Luminance combine supervised and unsupervised machine learning to aid lawyers in legal analysis, sifting through large quantities of information in legal documents and surfacing patterns which the lawyers can then interpret. Over 250 law firms[i] globally already use this technology, and its purported benefits, such as streamlined legal services and freeing lawyers for more important work, could see many more firms adopt it.
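Luminance's internal methods are proprietary, so the following is only a minimal sketch of the general idea of unsupervised pattern-finding over a document set: group documents whose vocabulary overlaps strongly, so a reviewer can inspect clusters rather than every file individually. The sample clauses, the similarity measure, and the threshold are all illustrative assumptions, not Luminance's actual pipeline.

```python
# Illustrative sketch of unsupervised document grouping (NOT Luminance's
# actual method): cluster documents whose word sets overlap strongly,
# so a reviewer can examine one representative clause per cluster.

def word_set(text):
    """Lower-case bag of words for a document."""
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity of two word sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.5):
    """Greedy single-pass clustering: a document joins the first cluster
    whose seed document is similar enough, otherwise starts a new one."""
    clusters = []  # list of (seed_word_set, [doc indices])
    for i, doc in enumerate(docs):
        ws = word_set(doc)
        for seed, members in clusters:
            if jaccard(ws, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((ws, [i]))
    return [members for _, members in clusters]

docs = [
    "the tenant shall pay rent monthly in advance",
    "the tenant shall pay rent quarterly in advance",
    "either party may terminate this agreement with notice",
]
print(cluster(docs))  # [[0, 1], [2]] - the two rent clauses group together
```

Real document-review systems use far richer representations than word overlap, but the division of labour is the same as in the essay's description: the machine proposes groupings, and the lawyer interprets them.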

The relevance of AI for the present

Data is at the core of algorithms, and data itself is a controversial issue, considered in a recent case in the Netherlands.[ii] This case dealt with an algorithm used to prevent benefits fraud, which drew on a wealth of personal information sourced from government data. The court held that this was inconsistent with Article 8 of the European Convention on Human Rights (the right to privacy) and that the legislation regulating the algorithm's use failed to comply with the ECHR's requirements because it was 'insufficiently clear and verifiable'. It was held that the Dutch government needed to strike a balance between the right to respect for private life and the relative benefits of using new technology to detect fraud.

There is an obvious connection between this case and the personal data used in test-and-trace algorithms globally in the fight against COVID-19. However, it may be argued that the relative benefits of using the data (helping to stop the spread of disease) outweigh the intrusion represented by the personal data involved. Nevertheless, the use of data and its implications for the ECHR is a nascent issue and has not yet reached the European Court of Human Rights.

Uncertainties and Risks

The use of algorithms within the decision-making process has an immediate implication for the basic rule-of-law principle of due process. The 'black box problem' is that, when an algorithm is a closed system, we do not know how it reached its decision. This was the basis of the appeal in the Wisconsin case of Loomis[iii] and is arguably in direct conflict with the principle of open justice. The case concerned the predictive algorithm COMPAS, used to assess the risk of an individual re-offending, and whether it should be considered in sentencing. The court held that it may be used as one factor within a more global approach, but that it would be problematic to rely on it as the only indicator when handing down sentences.
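The 'black box' objection can be made concrete with a toy contrast. Both scorers below are entirely invented (neither is COMPAS, whose methodology is proprietary): the first exposes the rule it applied, so the reasoning can be examined and challenged; the second returns only a number, leaving the defendant nothing to contest.

```python
# Invented toy contrast (not COMPAS): transparent vs opaque risk scoring.

def transparent_score(prior_offences, age):
    """Returns a score AND the human-readable rule that produced it,
    so the basis of the decision is open to challenge."""
    if prior_offences >= 3:
        return 0.8, "3+ prior offences -> high risk"
    if age < 25:
        return 0.5, "under 25 -> medium risk"
    return 0.2, "default -> low risk"

# Hidden weights stand in for a proprietary model: a score comes
# back, but the basis for it does not.
_WEIGHTS = (0.13, -0.02, 0.41)

def opaque_score(prior_offences, age):
    """Returns only a number; the weighting is undisclosed."""
    w0, w1, w2 = _WEIGHTS
    return max(0.0, min(1.0, w0 * prior_offences + w1 * age + w2))

score, reason = transparent_score(prior_offences=4, age=30)
print(score, reason)        # 0.8 3+ prior offences -> high risk
print(opaque_score(4, 30))  # a bare number, with no stated basis
```

The legal point is structural, not technical: due process presupposes reasons that can be scrutinised, and only the first interface supplies them.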

Furthermore, AIs are necessarily shaped by the datasets they ingest, and the choice of what data to include is made by humans. Rather than being free of the prejudices that human judges may hold, AIs can instead exaggerate these implicit biases. The gender data gap[iv] has already pervaded many otherwise successful algorithms, and there may also be a racial data gap.[v] Indeed, in the Dutch case discussed above, the possibility of inadvertent discrimination through the algorithm, combined with an absence of transparency, underpinned the court's finding of a breach of Article 8 of the ECHR. This was compounded by how the algorithm was deployed: almost exclusively in low-income and migrant areas. We must solve this problem before we can allow algorithms an even greater role in our justice system.
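The bias mechanism described here is easy to reproduce in miniature. The records below are invented purely for illustration: because investigations were concentrated in one area (mirroring the deployment pattern criticised in the Dutch case), a naive predictor trained on the resulting records scores that area as 'riskier', reflecting where investigators looked rather than underlying behaviour.

```python
# Toy illustration with invented data: a predictor trained on historical
# investigation records inherits the skew of where investigations happened.

# Each record: (area, investigated, fraud_found).
# Area "A" was heavily investigated; area "B" barely at all.
records = [
    ("A", True, True),  ("A", True, False),  ("A", True, False),
    ("A", True, True),  ("A", True, False),  ("A", True, False),
    ("B", True, False), ("B", False, False), ("B", False, False),
    ("B", False, False), ("B", False, False), ("B", False, False),
]

def naive_risk_score(records, area):
    """'Risk' = fraction of an area's records with a recorded fraud
    finding. Uninvestigated people count as no-fraud, which hides
    the sampling skew instead of correcting for it."""
    in_area = [r for r in records if r[0] == area]
    return sum(r[2] for r in in_area) / len(in_area)

print(round(naive_risk_score(records, "A"), 2))  # 0.33 - looks 'riskier'
print(round(naive_risk_score(records, "B"), 2))  # 0.0  - looks 'safe'
# The gap reflects where investigators looked, not underlying behaviour.
```

Feeding such scores back into decisions about where to investigate next would widen the gap further, which is why the combination of skewed deployment and opacity weighed so heavily in the Dutch judgment.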

Legal implications

The Luminance website boasts an 80% reduction in review costs per project. If AIs can cut the ever-growing time and cost of litigation, they may help make legal fees more affordable and therefore facilitate access to justice. Yet if we allow algorithms to dominate judgements, there will inevitably be concerns about the unknown mechanics of their decision-making. The best approach to employing AIs in the legal process may be to combine the efficient data analysis they offer with the human wisdom and discretion that judges continue to provide. Again, we return to the approach of the court in the Wisconsin case of Loomis: the output of the algorithm must be tempered by, and subject to, the decisions of humans.

References

[i] Luminance Technologies Ltd (n.d.). Luminance. [online] Available at: https://www.luminance.com/.

[ii] C-09-550982-HA ZA 18-388 [2020] (Court of The Hague). Available at: https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:865.

[iii] Loomis v. Wisconsin [2016] (Supreme Court of Wisconsin). Available at: https://caselaw.findlaw.com/wi-supreme-court/1742124.html.

[iv] Fatemi, F. (2020). Bridging The Gender Gap In AI. [online] Forbes. Available at: https://www.forbes.com/sites/falonfatemi/2020/02/17/bridging-the-gender-gap-in-ai/ [Accessed 21 Jan. 2021].

[v] Ferro, S. (2016). Here's Why Facial Recognition Tech Can't Figure Out Black People. [online] HuffPost. Available at: https://www.huffingtonpost.co.uk/entry/heres-why-facial-recognition-tech-cant-figure-out-black-people_n_56d5c2b1e4b0bf0dab3371eb?ri18n=true [Accessed 21 Jan. 2021].
