Written By Om Gupta
Published: May 28, 2023, 01:08 PM (IST)
Like other forms of artificial intelligence, generative AI learns how to take actions from past data. (Image: Reuters)
ChatGPT fooled a lawyer into believing that the citations it supplied in a case against Colombian airline Avianca were real when they were, in fact, bogus.
Lawyer Steven A Schwartz, representing a man who sued the airline, admitted in an affidavit that he had used OpenAI’s chatbot for his research, The New York Times reports.
After the opposing counsel pointed out the non-existent cases, US District Judge Kevin Castel confirmed that six of the submitted cases “appear to be bogus judicial decisions with bogus quotes and bogus internal citations”.
The judge has now set up a hearing as he considers sanctions for the plaintiff’s lawyers.
According to Schwartz, he even asked the chatbot whether it was lying. When he pressed for a source, ChatGPT apologised for the earlier confusion and insisted the case was real. It also maintained that the other cases it had cited were genuine.
Schwartz said he was “unaware of the possibility that its content could be false.”
He “greatly regrets having utilised generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”
Last month, as part of a research project on legal scholars, ChatGPT falsely placed a highly respected US law professor on a list of academics who had sexually harassed students.
Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was shocked to learn that ChatGPT had named him in the list.
“ChatGPT recently issued a false story accusing me of sexually assaulting students,” Turley posted in a tweet.