Bots, Bytes and Barristers:
How AI Might Beat Us at Our Own Game

Artificial Intelligence (AI) is an increasingly relevant (some may say dogged) topic across popular and professional spheres alike. With AI seemingly in a state of constant breakthrough, many questions, concerns, and fears arise.
Unsurprisingly, the legal industry has become a frothy frontier for this emerging technology, and questions abound about the impact of AI on the legal world, including whether (yes) and how (with cautious enthusiasm) it should be applied.
We looked at one of the many recent publications on the topic: A Lawyer’s Guide to Artificial Intelligence (ALGAI). This guide is published by Clio, a legal practice software company. To put ALGAI in context, on October 9, 2023, Clio announced to much fanfare its soon-to-be-released AI offering, Duo.
ALGAI’s softly worded, tentative suggestions reflect the general mood of the legal community toward AI. ALGAI takes a suitably optimistic-but-conservative and ultimately pragmatic point of view. We understand that AI can be an excellent tool for streamlining and expediting certain tasks that could take a human much longer to complete. In a law firm, “…advanced algorithms [could] swiftly review vast amounts of legal documents, identifying patterns and extracting relevant information faster than any human could.” (ALGAI)
This use seems innocuous enough: AI used as a tool to make something go faster, like using an electric can opener instead of a hand-cranked one.
However, that’s not where the suggested use of AI stops. AI could also be used for more advanced tasks such as “…legal research, due diligence, and contract analysis.” (ALGAI) That may still sound practical and unproblematic; now you have an electric can opener that perhaps knows how many cans you have in your pantry and what’s in each can, and will take a can down and open it for you when your recipe calls for it.
Then the guide takes a gentle, non-committal turn toward the character on everyone’s mind, the robot lawyer. “AI could also help predict legal outcomes based on historical data, providing valuable insights for case strategy.” (ALGAI)
So at this point your electric can opener also knows how many cans have been opened, what was in those cans, and can predict which cans on various shelves in various pantries within a vicinity will need to be opened at a given moment.
So far, all of this may seem convenient, intuitive, helpful. Except. A known swampy part of the AI landscape is accuracy. “AI tools (especially more general use tools that have not been tuned to a particular purpose or area of expertise) have a consistent problem with accuracy.” (ALGAI)
“Problem with accuracy” is an understatement: researchers have found that ChatGPT is “able to produce artificial hallucinations”. (Alkaissi and McFarlane)
We are strongly cautioned against relying on an AI tool without doing proper fact-checking. In other words, don’t feed the contents of the can to anybody based on what the AI has told you is in the cans. Taste what’s in there before dumping the contents into any meal plan.
The case Mata v. Avianca, Inc. (United States District Court, S.D. New York, decided June 22, 2023) now stands as a worldwide cautionary precedent against relying on ChatGPT to write your submissions. District Judge Kevin Castel found that lawyers Steven Schwartz and Peter LoDuca “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”
We note parenthetically that when a lawyer in our own law firm ran a few tests for fun in the early days of ChatGPT, we had the same outcome: the test brief relied on cases that don’t exist. Yet, eerily, the test brief with the imaginary cases was presented to us cloaked in a high level of artificial confidence.
Thanks to the Mata case, lawyers around the world need not experience the mortification of having to tell the court they were “operating under the false perception that [ChatGPT] could not possibly be fabricating cases on its own.”
The lawyers had served up a dish to which they thought they had added a reliable can of plum tomatoes, but which turned out to be a can of horse feathers. Ergo, “another best practice is to treat AI outputs as a first draft and double check what’s provided.” (ALGAI)
However, random hallucinations are just the beginning of the potential problems with AI. Some people in the know fear “job loss, misinformation, human extinction” (Levy). Elon Musk is famously obsessed with the “human extinction” aspect, and believes that the “moment when artificial intelligence could forge ahead on its own at an uncontrollable pace and leave us mere humans behind” […] “could happen sooner than we expected.” (Isaacson)
The type of AI that might become sentient and hostile to humans is called Artificial General Intelligence (AGI), and it is not on the market … yet.
You took your eyes off your highly efficient electric can opener and now it has opened the cupboard, thrown out all the cans, opened the front door, kicked you out, changed the locks, and opened a bank account using your savings.
In her (highly recommended) video “What Ethical AI Really Means”, Abigail Thorn, Ethics Consultant (and Techno Nun), points out numerous serious problems with regular AI, including accuracy problems and bias. She outlines some of the proposed methods of managing these problems. Each of these proposed solutions leads to ethical dilemmas that will need to be addressed. We would prefer that the problems be addressed as Ms. Thorn recommends, and we will leave it to readers to watch her video to find out what those very sensible recommendations are.
Ms. Thorn points out another very big problem at the literal roots of the AI industry, namely, a problem of injustice. Isn’t it ironic that lawyers are clamouring to stake a claim in a structure that in its current state defies fundamental justice?
To illustrate, Ms. Thorn recounts a personal experience in which someone used AI to make non-consensual pornography of her. Ms. Thorn describes in detail the time, effort, skill and money that went into making her own video, which a random person then screenshotted, flattening all that painstaking care and work to produce the “shoddy porn” that the person then released as their own.
The point Ms. Thorn makes with this illustration is that the person who stole her likeness “assumed it was just data that they could use”. This data-flattening process is how ChatGPT and DALL-E work. They scrape the internet for “training data” (no consent is asked or given).
Ms. Thorn’s point is that training data is not just data; it is work and art made by the labour of people. Using this “scrape and spit” model, however, content is generated cheaply and quickly. Some artists, including Sarah Silverman, are taking legal action against OpenAI for ingesting their work to train its AI models.
A further injustice arises in the context of infrastructure. “If you want AI, you need electricity, cables, power plants and lithium. Lithium is mined and refined and transported. It is turned into components, which are also transported. Every step uses non-renewable resources and emits CO2. Mining causes damage to people and the environment. Damage can be expressed in money. The cost to the miner’s health is borne by the miners. The cost to the environment is borne by everyone. The idea that the tech sector is clean is a lie.” (Thorn)
Then there is the ugly injustice associated with data-labelling. It takes a lot of people to label the huge volumes of data required to train AI. This work is done mostly by people on the margins, doing hard work for small pay, and this labour force is, ironically, increasingly overseen by algorithms.
Phil Jones describes data-labelling as “sub-employment”, referring to work that is done by a “globally dispersed complex of refugees, slum dwellers, and casualties of occupations, compelled through immiseration, or else law.” The Matrix plot should be popping into your mind about now.
Sure, your friendly can-opener opens cans faster than you ever could. Indications are clear that your can-opener will learn to do much more than open cans. According to dire predictions, it will ultimately become practically a shapeshifter, at which point you will be standing there uselessly, holding the handles of a force that now has a mind of its own and wants to enslave you.
But presuming Cannie never goes HAL 9000, and remains safely under the control of its tech-bro overlords, where do you and Cannie stand?
You and Cannie will soon become enmeshed, or even fused, as it does more and more of your mundane work and even some of your smart work. Soon you won’t be able to live without Cannie’s fast and docile powers and intelligence. And Cannie’s power and intelligence are the labour and attention of a vast, underpaid swamp of human workers who toil hidden in the dirty underbelly of the glamorous AI industry.
—
Ambrogi, Robert. “Clio Goes All Out with Major Product Announcements, Including A Personal Injury Add-On, E-Filing, and (Of Course) Generative AI”, LawSites, October 9, 2023. https://www.lawnext.com/2023/10/clio-goes-all-out-with-major-product-announcements-including-a-personal-injury-add-on-e-filing-and-of-course-generative-ai.html
A Lawyer’s Guide to Artificial Intelligence © 2023 Themis Solutions Inc. https://www.clio.com/guides/ai-guide-for-lawyers/
Mata v. Avianca, Inc., 22-cv-1461 (PKC), United States District Court, S.D. New York, decided June 22, 2023. https://caselaw.findlaw.com/court/us-dis-crt-sd-new-yor/2335142.html
Cerullo, Megan. “AI-Powered “Robot” Lawyer Won’t Argue in Court After Jail Threats”, CBS News, January 26, 2023. https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/
Alkaissi, Hussam and McFarlane, Samy I. “Artificial Hallucinations in ChatGPT: Implications in Scientific Writing”, National Library of Medicine, February 19, 2023. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9939079/
Levy, Steven. “Transformers”, Wired, October 2023.
Isaacson, Walter. “The Control Key”, Time, October 23, 2023.
Thorn, Abigail. “What Ethical AI Really Means”, YouTube, posted October 13, 2023. https://youtu.be/AaU6tI2pb3M?si=EwjUhf9Nk1UbKEFg. Accessed October 6, 2023, on Nebula: https://nebula.tv/videos/philosophytube-what-ethical-ai-really-means
Pardo, Melissa. “How to reduce bias in your training data”, Appen, September 15, 2022. https://appen.com/blog/ethical-data-for-the-ai-lifecycle-data-preparation/
O’Brien, Matt. “Sarah Silverman and novelists sue ChatGPT-maker OpenAI for ingesting their books”, AP News, July 12, 2023. https://apnews.com/article/sarah-silverman-suing-chatgpt-openai-ai-8927025139a8151e26053249d1aeec20
Ricaut, Jimmy. “Data labeling industry / When humans feed the machine”, Medium, November 9, 2020. https://ricaut.medium.com/data-labeling-industry-1543beb82586
Jones, Phil. “Refugees help power machine learning advances at Microsoft, Facebook, and Amazon”, Rest of World, September 22, 2021. https://restofworld.org/2021/refugees-machine-learning-big-tech/