Two repentant lawyers, facing an irate judge in Manhattan federal court, blamed ChatGPT on Thursday for misleading them into including fictitious legal research in a document filed with the court.
Lawyers Steven A. Schwartz and Peter LoDuca could be penalized over a filing in a lawsuit against an airline that cited previous court cases Schwartz believed were real but that were actually invented by the AI-powered chatbot.
Schwartz explained that he used the novel program to search for legal precedents supporting a client's case against Colombian airline Avianca over an injury suffered on a 2019 flight.
The chatbot, which has fascinated the world with its essay-like responses to user requests, suggested a number of aviation incident cases that Schwartz had been unable to find through the usual search methods used by his law firm.
The problem was that several of those cases either never happened or involved airlines that didn’t exist.
Schwartz told Judge P. Kevin Castel that he "acted on the misconception … that this website was getting cases from some source I didn't have access to."
Schwartz said he "failed miserably" at doing the follow-up research needed to ensure the citations were correct. "I didn't understand that ChatGPT could invent cases," he added.
Microsoft has invested around $1 billion in OpenAI, the company behind ChatGPT.
The success of ChatGPT, which shows that artificial intelligence could change the way humans act and learn, has raised fears among some.
Hundreds of industry leaders signed a letter in May warning that "reducing the risk of extinction from AI should be a global priority on par with other risks on a societal scale, such as pandemics and nuclear war."
Judge Castel seemed both puzzled and upset by the unusual incident, and disappointed that the lawyers had not moved quickly to correct the false legal citations when the problem was first brought to their attention by Avianca's lawyers and the court. Avianca exposed the bogus case law in a document filed with the court in March.
The judge confronted Schwartz with a legal case made up by the computer program. The matter was initially described as a wrongful death case brought by a woman against an airline, but morphed into a lawsuit involving a man who missed a flight to New York and incurred additional expenses.
"Can we agree that this is legal nonsense?" Castel asked.
Schwartz said he mistakenly believed that the confusing presentation had resulted from extracts obtained from different parts of the case. When Castel was done with his questioning, he asked Schwartz if he had anything else to add.
"I want to sincerely apologize," Schwartz stated.
The lawyer said that he had suffered personally and professionally from this blunder and that he felt “embarrassed, humiliated and extremely sorry.”
He claimed that he and the firm where he worked—Levidow, Levidow & Oberman—had put in place safeguards to ensure something similar never happened again.
LoDuca, the other attorney working on the case, said he trusted Schwartz and failed to properly review what his partner had compiled.
After the judge read aloud portions of one of the cited cases to show how easily its inconsistencies could be spotted, LoDuca said: "I never thought it was a fake case."
LoDuca said of the outcome: "I am extremely sorry."
Ronald Minkoff, a lawyer for the law firm, told the judge that the filing "was due to an oversight, not bad faith," and that it should not lead to sanctions.
He noted that lawyers have historically struggled with technology, particularly modern technology, “And it’s not getting any easier.”
"Mr. Schwartz, who does very little federal research, decided to use this new technology. He thought he was using an ordinary search engine," Minkoff said. "What he was doing was playing with live ammunition."
Daniel Shin, an adjunct professor and deputy director of research at the Center for Legal and Judicial Technology at William & Mary School of Law, said he presented the Avianca case during a conference last week that drew dozens of in-person and online participants from state and federal courts in the United States, including the federal courthouse in Manhattan.
He said the issue caused shock and bewilderment during the conference.
"We are talking about the Southern District of New York, the federal district that handles big cases, from September 11, 2001, to big financial crimes," Shin said. "This was the first documented case of possible professional misconduct by a lawyer using generative AI."
He said the case showed that lawyers might not understand how ChatGPT works, because the program tends to hallucinate, describing fictional things in a way that seems real but isn't.
"It highlights the dangers of using promising AI technologies without being clear about the risks," Shin commented.
The judge said he will rule on sanctions at a later date.
Source: AP
Source: Gestion