**With more than 30 years of experience in the profession, lawyer Steven Schwartz may lose his career over one foolish mistake: trusting ChatGPT without verifying its output.**

A man named Roberto Mata sued the airline Avianca, claiming he injured his knee when he was struck by a metal serving cart at New York’s Kennedy International Airport. The lawsuit was filed in Manhattan federal court, and the initial proceedings went smoothly. The airline’s lawyers moved to dismiss, arguing that the statute of limitations had expired. The plaintiff’s attorney, Steven Schwartz, disagreed, and in response submitted a 10-page brief summarizing eight similar court cases. The problem, however, was that no one could find the rulings or any of the excerpts cited in the 10-page brief. As it turned out, the case summaries in the document had all been generated by ChatGPT. Realizing his mistake, the lawyer said he “looked forward to the leniency of the court” and admitted to using AI, “an unreliable source,” to research legal authorities.

## AI’s involvement in legal documents

According to the *New York Times*, the dispute between plaintiff Roberto Mata and Avianca began on flight 670 from El Salvador to New York on August 27, 2019. An airport employee pushing a serving cart collided with Mata, injuring him. After Mata filed his lawsuit, the airline moved to dismiss, asserting that the claim was time-barred. At a hearing in March, Mata’s lawyer argued that the plaintiff could still proceed, citing rulings from many earlier, related cases. Soon after, the airline’s lawyers countered that they could not find the cases mentioned in the plaintiff’s brief. “We have not been able to determine when or where the cases cited in the document occurred, or any similar cases,” they said.
Judge Castel asked Mata’s attorney for full copies of the cited rulings. The lawyer submitted abstracts of the eight cases, including the court, the presiding judge, the docket number, and the date of each. Avianca’s lawyers pushed back again, saying that for six of the eight cases they could find no citations, docket numbers, or legal records matching this information. Avianca attorney Bart Banino therefore asserted that these cases were entirely bogus and suspected an AI chatbot was involved.

Attorney Schwartz later admitted to consulting ChatGPT to “supplement” his work, and in the process he made the mistake of believing it, citing six bogus cases in his brief. Schwartz told Manhattan U.S. District Judge Kevin Castel that he had no intention of fabricating authorities to defend the lawsuit against Avianca. This was also the first time he had used ChatGPT in a case, so he was “not aware of the risk that it could fabricate information.” He had even asked ChatGPT to confirm whether the cases it mentioned were real, and at the time, OpenAI’s chatbot confirmed that they were. “Are the cases you bring up fake?” Schwartz asked. ChatGPT replied: “No, the cases I brought up are real and can be found in legal databases.”

## Encountering AI-forged legal documents for the first time

Speaking at the hearing, Schwartz expressed deep regret for depending on ChatGPT. He pledged never to repeat the mistake and never to rely on such a tool again without thorough verification. For his part, Judge Castel said this was the first time he had encountered a filing in which every submitted authority was false: even the legal citations and reference cases were forged. He scheduled a hearing for June 8 to make a final decision.
According to the *New York Times*, as artificial intelligence gradually spreads through the online world, many people worry about a future in which computers replace not only social interaction but also human labor. Knowledge workers in particular fear that their daily work will be valued only for output and profit rather than properly appreciated. Stephen Gillers, a professor of legal ethics at the New York University School of Law, says this is a prominent issue in the legal world, where lawyers have repeatedly debated the benefits and risks of AI software like ChatGPT.