
With artificial intelligence becoming more prevalent, it begs the question: what are the hidden dangers of using AI for legal advice?

AI is becoming increasingly sophisticated and has found itself more in the public eye in recent years, so there is a temptation to use it as a tool to make life easier.

For instance, why bother paying for an expensive and time-consuming solicitor to draft your claim when you can simply enter the necessary information into ChatGPT or a similar AI tool, which will draft it for you speedily and free of charge? One lawyer in America discovered the perils of doing so, much to his dismay.

Mata v Avianca – Steven Schwartz’s downfall

In August 2019, Mr Mata injured his knee when a metal cart on his Avianca flight struck him. In bringing his lawsuit, Mr Mata chose to instruct Mr Steven Schwartz, a New York-based lawyer with 30 years of experience behind him. Despite Avianca claiming the case was out of time and should not be allowed to proceed, Mr Schwartz presented the court with a 10-page document in support of the claim proceeding. This document cited numerous cases, including Martinez v Delta Airlines, Zicherman v Korean Air Lines and Varghese v China Southern Airlines, which seemingly supported his argument wonderfully.

The problem? None of these cases actually existed.

Mr Schwartz had not actually written this document himself. Instead, he had input the required information into ChatGPT, which then proceeded to produce completely fabricated case examples, upon which the lawyer relied.

It was not until the judge requested copies of these judgements, and the legal team's search for them proved fruitless, that their non-existence was discovered.

Dangers of using AI for legal work

Clearly, one of the dangers that has already come to fruition (much to Mr Schwartz's embarrassment) is that AI can and will simply fabricate cases and precedents upon which a party may then, wrongly, rely.

However, this is by no means the only danger faced when making use of AI in this way. Other dangers include:

Misunderstanding

Even if AI did provide accurate legal advice, the chances are that, without the support of a legally trained adviser, that advice would be misunderstood or misapplied, so that it does not support the case at all and instead dooms it to fail.

Data Protection Breaches

AI is still relatively new to the public, and there are very few laws surrounding its use. Personal information that would ordinarily be protected does not enjoy the same protection once it is entered into an AI tool, so inputting it may result in a data breach.

Lack of quality

AI is still very new and, as with any new technology, there are bound to be problems and issues that need to be resolved. For clear, correct and quality advice, instructing a lawyer will always be the better option.

How we can help you to avoid the dangers

We are a team of highly qualified and efficient lawyers, able to provide you or your company with high-level, high-quality legal advice when you need it. We ensure that all advice provided is accurate and legally compliant. We can also assist with drafting AI policies so that your staff are aware of the boundaries of using AI in the workplace. Complete the contact form below and one of our employment law specialists will be in touch.

Key Contact

Debbie Coyne

Employment Law Senior Associate Solicitor


Debbie is a Senior Associate in the Employment team who regularly attends our offices in Altrincham, Warrington and Chester. She is recommended in The Legal 500 and has been named as a Rising Star.

