A stern warning echoed through the legal halls of England and Wales this week as a top judge sounded the alarm on the unchecked use of artificial intelligence in legal work.
Judge Victoria Sharp delivered a clear message, warning that popular language tools such as ChatGPT can produce arguments that look convincing at first glance but are simply not true.
The ruling came after a review of two striking cases that laid bare just how easily even seasoned lawyers can stumble when they rely too heavily on these automated aids. In one case, a legal filing aimed at two banks was loaded with citations said to support the client’s claims. Eighteen of those citations pointed to cases that, in reality, did not exist. Many of the others, Judge Sharp found, quoted lines that were never said or drew conclusions the cited cases did not support at all.
In a separate case involving a dispute over a London eviction, another lawyer submitted a court document with five case references that no one could find in any legal database. That lawyer insisted she had not used any AI tools, but admitted her research might have drawn on summaries generated by Google or Safari, which could themselves have been produced by AI.
Sharp explained that the growing wave of such incidents points to a serious problem and a need for stricter oversight. Her ruling will now be forwarded to professional governing bodies such as the Bar Council and the Law Society, in an effort to tighten standards through the profession's disciplinary processes.
Importantly, the judge clarified that while technology itself is not banned in legal research, professionals must always verify every fact and citation against reliable sources before bringing it into the courtroom.
Lawyers who fall short of these responsibilities, she warned, put themselves at risk of sanctions ranging from public rebuke to more severe consequences like being reported to the police.
Neither of the lawyers at the center of these cases will face contempt proceedings this time, but the judge made clear that this leniency should not be taken as a precedent: future failures could draw tougher penalties.
Both lawyers are now under review by their regulatory bodies, as the legal profession grapples with the growing problem of fake, AI-generated case citations in legal filings.