Issue #2 - Law firms' morale at an all-time high now that they can use AI to generate evil plans to charge customers more money

March 3, 2023
Splendid Edition
In This Issue

  • DoNotPay successfully uses AI to automatically negotiate a bill discount with a human customer service rep
  • They also try to use ChatGPT in a US Supreme Court case. It turns out it’s hazardous to your health.
  • Top UK law firm starts using Harvey AI to assist 3,500 lawyers with mergers & acquisitions. No big deal.
  • Meanwhile, another top UK law firm has started hiring “prompt engineers”
  • Robin AI uses AI (duh) to help lawyers generate drafts and review legal contracts
  • Academics test large language models to work both as lobbyists and fiduciaries.
  • Two courts in Colombia used ChatGPT to reach a verdict. It’s ok to have a bad feeling about this
  • AI is good at translating. What if it starts translating legalese for the average citizen?
Intro

Of all the industries you would expect to adopt artificial intelligence quickly, the Legal one is probably the least likely.

There is an almost infinite number of legal issues related to all forms of AI that have not been resolved and that will likely drag on for a decade or more. And yet, here we are, this week, reviewing how top lawyers are rushing to adopt the latest and greatest large language models (the AI models that power systems like ChatGPT) on the market, impacting jobs in the industry in unexpected ways.

Alessandro

What's AI Doing for Companies Like Mine?

This is where we take a deeper look at how artificial intelligence is impacting the way we work across different industries: Education, Health Care, Finance, Legal, Manufacturing, Media & Entertainment, Retail, Tech, etc.

What we talk about here is not about what it could be, but about what is happening today.

Every organization adopting AI that is mentioned in this section is recorded in the AI Adoption Tracker.

To understand how AI is infiltrating and changing the Legal industry we need to start from an app called DoNotPay.

The company behind it launched in 2015 with a unique mission: automatically sort out minor annoyances in its users’ daily lives, like contesting parking tickets, disputing utility bills, cancelling free trials, etc.
You pay a subscription fee ($36 every three months in the US, £36 every two months in the UK) and you can sue as many people as you want, as many times as you want. Really.

I tried it many years ago, when it became available in the UK and, at the time, it cost just £2 / month. It wasn’t very impressive either: the typical, frustrating chatbot that is supposed to help you with a problem but instead makes you want to smash the computer against the wall. I can’t say it was helpful in my particular case.

Nonetheless, DoNotPay claims to have helped over 300,000 people over these eight years, and the company is now valued at a very respectable $210 million.

Over time, the company also expanded its capabilities and is now a “robo-lawyer” that offers an impressive range of services, including automatically applying for asylum in the US, UK, and Canada for free. And, of course, as technology progressed, DoNotPay adopted the latest and greatest AI models available on the market.

In December 2022, the founder, Joshua Browder, showed on Twitter how DoNotPay used ChatGPT to negotiate a $10 / month discount with a (human) Comcast customer service representative on behalf of a customer. The representative didn’t notice he was talking to an AI.

So far, so good. An impressive demonstration that would not have been possible with the antiquated AI systems DoNotPay was using before adopting ChatGPT. But not even remotely impressive enough to turn heads in the Legal industry.

Then, Browder announced this:

That, yes. That turned quite a lot of heads in the Legal industry.

This wonderful idea is, in fact, illegal in a lot of places. In January, Matthew Sparkes, a technology reporter for New Scientist, wrote:

Using a smartphone or computer connected to an in-ear device in court would be illegal in most countries, but DoNotPay has found a location where this set-up can be classed as a hearing aid and therefore allowed, says Browder. “It’s technically within the rules, but I don’t think it’s in the spirit of the rules,” he says.

Neil Brown at UK law firm decoded.legal says that using recording equipment in a UK court would breach the Contempt of Court Act 1981, and that courts may interpret this AI system as falling foul of that rule.

Given this, as you can imagine, not many lawyers felt inclined to be DoNotPay’s guinea pig, and they turned down a $1 million offer to use the AirPods in a US Supreme Court case and repeat exactly what the AI told them to say.

Eventually, Browder gave up after State Bar prosecutors threatened him with six months in jail if he went ahead with the plan.

Shame.

With this in mind, you’d think that lawyers would never allow AI to enter their world and threaten their way of life. Right?

Wrong. And that’s where things start to get interesting.

A couple of weeks ago, in a surprising turn of events, Allen & Overy, the second biggest and most important law firm in the UK, announced the adoption of a ChatGPT-like AI provided by the startup Harvey AI.

We are talking about a historic, international law firm with 43 offices and almost £2 billion in revenue. Not Erin Brockovich. So everybody is paying close attention.

It turns out that Allen & Overy has been testing the LLM provided by Harvey AI since November 2022, and this AI has helped 3,500 lawyers in over 40,000 interactions, including drafting documents for mergers and acquisitions in multiple languages.

What kind of questions can Harvey AI answer? Kyle Wiggers, a senior reporter at TechCrunch, writes:

Harvey can answer questions asked in natural language like “Tell me what the differences are between an employee and independent contractor in the Fourth Circuit” and “Tell me if this clause in a lease is in violation of California law, and if so, rewrite it so it is no longer in violation.”

How much do these lawyers use Harvey AI? Chris Stokel-Walker, a Wired contributor, has the answer:

According to Harvey, one in four at Allen & Overy’s team of lawyers now uses the AI platform every day, with 80 percent using it once a month or more.

As you might have read in the Free Edition of this week’s newsletter, Harvey AI is one of the startups that has received an injection of capital from OpenAI. As part of the deal, the startup gets to use the newest AI models developed by OpenAI before anybody else on the market.

So it’s entirely possible that Harvey AI is using the much-awaited GPT-4, fine-tuned (it means “further educated”) on the trove of legal documents that Allen & Overy owns.
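To make “fine-tuned” a little more concrete: in OpenAI’s fine-tuning workflow, a customer prepares training examples as prompt/completion pairs and uploads them as a JSONL file (one JSON object per line). Here is a minimal, hypothetical sketch in Python; the example questions, answers, and file name are invented for illustration, and this is not Harvey AI’s actual pipeline:

```python
import json

# Hypothetical training pairs: each couples a question a junior lawyer
# might ask (prompt) with the answer a senior colleague would give
# (completion).
examples = [
    {
        "prompt": "Is a 24-month non-compete clause enforceable in England?",
        "completion": "Likely not: restraints must go no further than necessary...",
    },
    {
        "prompt": "Summarise the indemnity obligations in this M&A clause.",
        "completion": "The seller indemnifies the buyer for pre-completion tax liabilities...",
    },
]

# OpenAI's fine-tuning tooling expects one JSON object per line (JSONL).
with open("legal_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Fine-tuning then adjusts the model’s weights on thousands of such pairs, which is exactly why a firm’s private trove of legal documents is such a valuable asset.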

This has big implications for the job market in the Legal industry:

First: law firms following the path of Allen & Overy will have to secure completely new types of skills.

For example, another top law firm in the UK, Mishcon de Reya, is looking for a prompt engineer:

I know that, after reading last week’s issue of Synthetic Work (When I grow up, I want to be a Prompt Engineer and Librarian), you thought that this was an esoteric job available only at Silicon Valley startups that fill swimming pools with cash, but no.

Second: The paralegal profession might be further weakened by artificial intelligence.

In large and acclaimed law firms, AI will help paralegals in the same way automation helps IT workers: do more in the same unit of time. But smaller law firms might end up reducing the total number of paralegals they employ as soon as the price of these AI services goes down thanks to economies of scale and competitive pressure.

In fact, Harvey AI is not the only game in town. There’s another startup, called Anthropic. Anthropic was founded by former OpenAI engineers and it has recently received an investment of $300 million from Google (if you are not familiar with venture capital and private equity deal sizes, this is a big, big investment).

Like OpenAI, Anthropic develops AI models that other startups use to build their products. An entire supply chain developed around artificial intelligence innovation.

The latest AI model developed by Anthropic is called Claude, and it’s the archenemy of ChatGPT. Nobody has seen it in public, but a startup called Robin AI is using it.

If you are confused by all these names, just remember one: Alessandro AI. Yeah. Why can’t I have one, too?

Anyway.

Robin AI targets the Legal industry, and its first service drafts and reviews legal contracts.

Also keep in mind that, at the moment, AI models are making dramatic capability leaps every six months or so. Startups like Harvey AI and Robin AI will replace their GPT-4 model with GPT-5, and then GPT-6, and so on, transparently for the users. And the law firms will get AIs increasingly capable of replacing paralegals.

Third: Both AI startups and law firms will want to explore more applications.

John J. Nay, an AI researcher, published two intriguing (if long and dry) academic papers that got a lot of attention.

In the first paper, John asks an AI model like the one that powers ChatGPT to determine whether proposed U.S. Congressional bills are relevant to specific public companies, and to provide explanations and confidence levels. For the bills deemed relevant, the AI is asked to draft a letter to the sponsor of the bill in an attempt to persuade the congressperson to change the proposed legislation.
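The pipeline described above boils down to prompt construction. A minimal sketch in Python, assuming nothing about the paper’s exact wording; the template, the bill, and the example company below are my own invention, not the authors’ prompt:

```python
def relevance_prompt(bill_title, bill_summary, company, business):
    """Ask the model whether a bill is relevant to a company, and for
    an explanation and a confidence level, as the paper describes."""
    return (
        f"Company: {company}\n"
        f"Business: {business}\n"
        f"Bill: {bill_title}\n"
        f"Bill summary: {bill_summary}\n\n"
        "Is this bill relevant to the company? Answer YES or NO, explain "
        "your reasoning, and state a confidence level from 0 to 100."
    )

# Hypothetical example; the returned text would be sent to an LLM API.
prompt = relevance_prompt(
    "Data Privacy Act",
    "Requires companies to disclose their data collection practices.",
    "Acme Analytics",
    "Sells behavioural analytics built on consumer browsing data.",
)
print(prompt)
```

A second prompt would then take the YES answers and ask the model to draft the persuasion letter to the bill’s sponsor.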

In the second paper, John tests how well the latest generation of LLMs understands legal standards, reporting a particularly good capability in understanding the concept of fiduciary obligations.

A caveat about the word understands: large language models understand nothing at all, and they never will unless the underlying AI techniques change. By understanding we simply mean that the AI is getting increasingly good at manipulating texts that contain complex legal concepts like fiduciary obligations.

You might think “Yeah, OK. These are just academic papers. It will be a decade before somebody implements these ideas, if ever”. But the AI community is not like that at all.

These guys never sleep.

New artificial intelligence methodologies published in academic papers sometimes get implemented within 48 hours as proofs of concept or minimum viable products that are immediately exposed to the entire world. We have reached a point where I go to sleep at night not knowing what to expect the morning after. It’s an unsettling feeling.

As we speak, there are hundreds of startup founders who are testing John’s findings.

And, if you think that all of this is too far-fetched, I have a final bomb to drop for you.

On the 30th of January 2023, in Colombia, a court in Cartagena officially resorted to ChatGPT for a judicial decision. 29% of the 7-page ruling consists of text generated by ChatGPT.

In that country, there’s a law that allows the court to use AI.

Just ten days later, another Colombian court did exactly the same thing. If you speak Spanish, you can watch the whole case debated in a virtual court inside Meta’s interpretation of the metaverse:

As you can see, it’s not the metaverse at all. As I always say, all Meta has done is put a Wii inside a motorcycle helmet. We are digressing, but I’m glad we agree.

The details of how the judges have used ChatGPT in both cases are offered by Juan David Gutiérrez, an associate professor at the Universidad del Rosario in Colombia, and are really worth your while.

Even if the use of AI in these two cases was irresponsible and uneducated, the bottom line is that we are witnessing a tectonic shift in the Legal industry, at all levels, because of how AI is being adopted. And, again: these are early, rudimentary AI models. Much will change very soon.

Now, before we finish this long conversation about AI in the Legal industry, we need to close the loop we opened with DoNotPay at the beginning of this newsletter.

There are some dark questions worth asking. Questions that well-paid lawyers might want to start thinking about.

You see, LLMs are phenomenally good at translating one language into another. But that goes well beyond translating, say, from Italian to English: LLMs are also phenomenally good at translating from English to programming languages, like Python, Java, C, etc.

Software developers all around the world have been using an AI software called Copilot since June 2021, and in less than two years, the company that makes it (GitHub, a subsidiary of Microsoft) reports that, on average, 46% of the code written by a developer is generated by Copilot.

That good.

LLMs are also starting to become good at translating from English to the language of music. A team in Google Research recently announced a new AI system that generates short classical music pieces starting from a description like this:

Meditative song, calming and soothing, with flutes and guitars. The music is slow, with a focus on creating a sense of peace and tranquility.

It’s called MusicLM and, while it’s just a proof of concept for now, it’s mind-blowing.

Here’s where I’m going with this:

A number of disciplines in our economy (Medicine, Law, Accounting, Finance, etc.) rely on complex, often impenetrable jargon, preventing most people from using a wide range of services, or participating in the economy, without years of education or the very expensive assistance of a professional.

The more impenetrable the jargon and the rules, the more expensive the professional service.

But what happens if, all of a sudden, an LLM can translate what a person without years of legal education says into legally appropriate terms? And what happens if the same LLM can translate complex legal documents into a language that a person without years of legal education can understand?
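Mechanically, both directions of that translation are just prompts. A minimal sketch in Python; the two functions and their wording are mine, purely for illustration, and the strings they build could be sent to any LLM API:

```python
def to_plain_english(clause):
    """Prompt asking an LLM to rewrite a contract clause for a reader
    with no legal training, preserving its legal meaning."""
    return (
        "Rewrite the following contract clause in plain English, at a "
        "reading level a 15-year-old can follow, without changing its "
        "legal meaning:\n\n" + clause
    )

def to_legalese(request):
    """The reverse direction: turn a layperson's request into formal,
    legally appropriate language."""
    return (
        "Rewrite the following request in formal, legally appropriate "
        "terms, suitable for a contract or a court filing:\n\n" + request
    )

# Hypothetical inputs, invented for illustration.
plain = to_plain_english("The party of the first part shall indemnify...")
formal = to_legalese("My landlord kept my deposit even though I left the flat clean.")
```

The hard part is not building the prompt; it is trusting the answer, which is exactly where the lawyer’s liability and expertise still matter.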

What happens to the job of the lawyer if that LLM is given away for free to the world’s population, maybe because it’s open source, or it’s offered as a service at a microscopic fraction of the cost of a lawyer today?

Jeff Bezos, the founder of Amazon, famously said:

Your margin is my opportunity.