Lawyer Sanctioned for Filing Briefs Citing Hallucinated Cases Generated by Artificial Intelligence

United States District Judge Kai N. Scott, sitting in the Eastern District of Pennsylvania, recently issued sanctions against a lawyer who relied on generative artificial intelligence that produced fabricated legal citations. The judge introduced the matter with a quote from Karel Čapek’s 1920 science fiction play R.U.R. (Rossum’s Universal Robots): “My dear Miss Glory, the Robots are not people. Mechanically they are more perfect than we are; they have an enormously developed intelligence, but they have no soul.” The matter joins a growing list of cases in which legal professionals have faced repercussions for submitting court filings that cite non-existent case law generated by AI tools.

In this matter, Judge Scott concluded that the attorney, Raja Rajan, essentially “outsourced his job to an algorithm.” Consequently, the court imposed a $2,500 sanction and mandated that the attorney complete a Continuing Legal Education program focusing on artificial intelligence and legal ethics. The discipline rests on Rule 11 of the Federal Rules of Civil Procedure, which requires lawyers to certify the factual and legal accuracy of their submissions to the court.

The judge emphasized that the duty under Rule 11 is squarely on the attorney, not the technology. The court wrote, “[U]nlike the cases Mr. Rajan cited, Rule 11 is not artificial; it imposes a real duty on lawyers — not on algorithms — to ‘Stop, Think, Investigate and Research’ before filing papers either to initiate a suit or to conduct the litigation,” citing a 1987 case from the 3rd U.S. Circuit Court of Appeals.

A Pattern of Unverified Submissions

The narrative surrounding this case is becoming increasingly familiar across the legal landscape. After Mr. Rajan filed two motions, the court “was perplexed” to discover that two of the cited cases could not be located using standard legal research tools. Additionally, the lawyer cited two other cases for legal propositions unrelated to the issues those cases actually decided, and two further cases stating law that was outdated or had been abrogated.

When Judge Scott issued an order for Mr. Rajan to show cause why he should not be disciplined, the attorney offered an explanation, stating he “never in [his] wildest dreams” would have predicted that the AI tool would generate fictitious cases. Mr. Rajan informed the court that while he had previously used Casetext to review briefs, this was the first time he had used a generative AI tool. He asserted that he did not expect the tool to fabricate cases in order to support his desired outcomes.

The court rejected this explanation, stating, “Far from reasonably inquiring into the legal contentions contained in his briefs, Mr. Rajan blindly trusted an algorithm he had never used before.” Judge Scott noted that the attorney had conducted no research into the AI tool’s efficacy for legal work, its reliability, or, crucially, into the legal validity of the cases it cited.

The court acknowledged the continuous evolution of technology and legal research tools. However, the judge cautioned that “if approached without prudential scrutiny, use of artificial intelligence can turn into outright negligence.” The core of the attorney’s negligence in this case, the judge stressed, was the failure to verify the cited case law.

The ruling made clear that while Rule 11 does not specifically forbid using artificial intelligence for research assistance, it places the ultimate responsibility for verifying every legal and factual assertion in a motion on the signing attorney. For these reasons, the court ordered Mr. Rajan to pay a $2,500 penalty and complete a one-hour CLE-accredited seminar or educational program addressing both artificial intelligence and legal ethics.