
In a first for New York’s appellate courts, the Appellate Division, Third Department, has excoriated an attorney for the misuse of generative AI in legal papers. Across five filings during an appeal, the attorney included “no less than 23 fabricated cases, as well as many other blatant misrepresentations of fact and law from actual cases.” For that conduct he was fined $7,500, in Deutsche Bank National Trust Co. v. LeTennier, decided on January 8, 2026.
It all started as a garden-variety foreclosure action on real estate in upstate New York. The defendant, Jean LeTennier, initially proceeding pro se, was unsuccessful in the trial court, including on a number of post-judgment motions. At that point the attorney stepped in to represent the defendant on appeal. The Court noted that there are recognized legal grounds for setting aside a foreclosure when the facts and law support them. But the attorney relied upon generative artificial intelligence to do his legal research, and never checked whether the precedents the AI generated actually existed or actually stood for the propositions assigned to them. Most were non-existent.
“Hallucinated cases” may look like real cases, “but their citations lead to cases with different names, in different courts and on different topics – or even to no case at all.” An attorney using AI as an assistant has an absolute obligation to check, and read, every case he cites, whether it was generated by AI or found through genuine, old-fashioned legal research. That did not happen here, to the Court’s consternation.
According to the Court, defendant submitted at least 23 fabricated legal authorities (“AI hallucinations”) across five filings during the pendency of the appeal. The attorney acknowledged that the papers were his own, and that by signing a paper an attorney or party certifies that the paper, its presentation and its contentions are not frivolous. In his own defense, the attorney argued that 90% of his citations were accurate! To the Court, that defense was “simply unacceptable by any measure of candor to any court.”
The Court noted that attorneys and litigants are not prohibited from using AI to assist with the preparation of court filings. But they must not rely upon these technologies without training in their dangers and without human oversight. The attorney has an obligation to fact-check and cite-check every document filed with a court.
Perhaps most offensive to the Court was the attorney’s attitude at oral argument. “Rather than taking remedial measures or expressing remorse, defense counsel essentially doubled down . . . on his reliance of fake legal authorities as not ‘germane’ to the appeal.”
On top of all that, the Court found that the attorney and his client had pursued a baseless appeal. For that, the attorney was fined $7,500 and his client $2,500, both as punishment and as a deterrent to others.
Courts throughout the country are facing these problems and sanctioning attorneys. Federal district courts in New York and New Jersey have imposed monetary sanctions under Federal Rule of Civil Procedure 11 against attorneys who cited fake cases produced by generative AI. The federal district court in New Jersey put it this way last September in OTG N.Y., Inc. v. Ottogi Am., Inc.:
Unfortunately, attorneys’ use of generative AI without proper oversight has become a prevalent issue for courts across the country. Examples abound where district courts have found legal briefs containing AI hallucinated case law masked as actual legal authority. In other words, AI can generate a legal proposition that appears to come from a judicial opinion and is even properly cited under The Bluebook Uniform System of Citation; but there is just one problem – it’s not real. AI made it up. And those who rely on AI blindly, do so at their own peril.
We hope these stern warnings and stiff penalties will help to stem this unacceptable practice in our courts.