The two lawyers who submitted fake legal research generated by the A.I. chatbot ChatGPT were hit with a $5,000 fine and a scolding from a federal judge. The lawyers submitted a legal brief in an airline injury case in May that turned out to be riddled with citations to nonexistent cases. The attorneys, Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman, initially defended their research even after opposing counsel pointed out that it was fake, but eventually apologized to the court.

Schwartz, who created the ChatGPT-generated brief, had already appeared at a court hearing on June 8, where he explained his actions. At the hearing, he said he didn't know that ChatGPT could fabricate legal precedents, and added that he was humiliated and remorseful.

"I heard about this new site, which I falsely assumed was, like, a super search engine," Schwartz said.

On Friday, U.S. District Judge Kevin Castel, who presided over the case in Manhattan, filed a sanctions order against Schwartz and LoDuca that said fake legal opinions waste time and money, damage the legal profession's reputation, and deprive the client of authentic legal help.

"Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance," the sanctions order read. "But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings."

The order continued, saying that the attorneys and their firm abandoned their responsibilities when they submitted the 10-page brief rife with nonexistent quotes and citations.

The order also reprimanded the lawyers for standing by their research and not admitting the truth for over two months, from March to May, even after the court and opposing counsel called their evidence into question.

The judge imposed the $5,000 fine on Schwartz and LoDuca as a deterrent, not as punishment or compensation. The order is careful to note that using A.I. should not be prohibited, because good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias, and databases such as Westlaw and LexisNexis, and A.I. is the newest addition to this toolkit. But the judge emphasized that all A.I.-assisted or A.I.-generated filings must be checked for accuracy.

The Schwartz and LoDuca incident comes after a Goldman Sachs report in March estimated that A.I. could automate 44% of all legal work. Another March report, by researchers from Princeton University, New York University, and the University of Pennsylvania, found that the industries most exposed to advances in language modeling are legal services and securities, commodities, and investments.

Much of legal work involves researching past cases and precedents, reviewing contracts, and drafting documents, all of which ChatGPT can do far faster than a human. However, Friday's sanctions underscore that A.I. is still prone to inaccuracies and "hallucinations," or fabricating information.

"The thing that I try to caution people the most is what we call the hallucinations problem," Sam Altman, CEO of ChatGPT's maker OpenAI, told ABC News in March, soon after Schwartz created his brief. "The model will confidently state things as if they were facts that are entirely made up."

Levidow, Levidow & Oberman did not immediately respond to Fortune's request for comment. The judge separately dismissed the original case on the grounds that it was untimely.

