A friend alerted me to this recent Report and Recommendation in a case involving a request to audit books and records under the Employee Retirement Income Security Act of 1974, as amended (commonly known as ERISA). The Report and Recommendation relates to the inclusion of citations to nonexistent cases in court filings made by a solo practitioner, Rafael Ramirez. I find the court’s narrative, reasoning, and recommendation illuminating in a sobering sort of way. As many of us feel our way through how best to guide our students in using generative artificial intelligence in their legal work, the Report and Recommendation offers food for thought.

To start, I was surprised by the explanation offered by Mr. Ramirez in response to the court’s order to show cause why he should not be sanctioned for violating Federal Rule of Civil Procedure 11(b). In that regard, the court represented that

Mr. Ramirez admitted that he had relied on programs utilizing generative artificial intelligence (“AI”) to draft the briefs. Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations.

Is it possible that legal counsel today–especially legal counsel using generative AI in their work–would not know that citations to legal authority generated through AI can be fake or faulty? Regardless of the answer to that question, we should ensure that our students all understand generative AI’s capacity to create falsehoods.

The Report and Recommendation aptly noted that “[c]ourts have consistently held that failing to check the treatment and soundness—let alone the existence—of a case warrants sanctions” and observed that “[t]he arrival of modern legal research tools implementing features such as Westlaw’s KeyCite and Lexis’s Shepardization has enabled attorneys to easily fulfill this basic duty,” adding that, as a result, “[t]here is simply no reason for an attorney to fail to fulfill this obligation.”

The court’s observations seem unassailable. The bar should understand the potential perils of generative AI usage. And our students should understand them, too, and also should recognize that AI is not a substitute for the important work of cite-checking.

Ultimately, the court concluded that “[i]t is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law” and added that “[h]ad he expended even minimal effort to do so, he would have discovered that the AI generated cases do not exist.” The court sanctioned Mr. Ramirez $5,000 for each of the three briefs in which it identified hallucinated case citations, for an aggregate of $15,000. While this may seem like a relatively small financial penalty, the court noted that it is on the high end of the scale based on a review of earlier sanctions for similar misconduct.

Nevertheless, the monetary sanctions are just part of what Mr. Ramirez is facing. The court also found Mr. Ramirez to be in violation of applicable rules of professional conduct in three areas: competence; meritorious claims and contentions; and candor toward the tribunal. As a result, the court referred “the matter of Mr. Ramirez’s misconduct in this case to the Chief Judge pursuant to Local Rule of Disciplinary Enforcement 2(a) for consideration of any further discipline that may be appropriate.”

The potential combination of legal and professional censure for misconduct of this kind should resonate with business law students. The rules and processes relating to each system of enforcement are different, but both are significant, and both carry relational and reputational consequences. I plan to share the court’s Report and Recommendation–and this blog post–with my students to help ensure their knowledge of issues at the intersection of AI and professional responsibility is as comprehensive as possible.