CereBel Files Amicus Curiae Brief to Provide Perspective on Beneficial Use of Generative AI
Concerned by the publicity surrounding the sanctions hearing in the matter of attorneys who filed a brief containing hallucinated citations, and by the possibility of an expansive reaction in which the court condemns all Generative AI technology for its misuse by some practitioners, CereBel filed a Proposed Amicus Curiae Brief to Support the Court's Consideration of Sanctions. The Brief also addresses two standing orders in other districts, prompted by the matter, that govern use of the technology: "Mandatory Certification Regarding Generative Artificial Intelligence," Hon. B. Starr (N.D. Tex.), and "Standing Order for Civil Cases Before Magistrate Judge Fuentes," N.D. Ill. (May 31, 2023).
The full Brief:
Summary of Amicus Curiae Brief auto-generated by CereBel's Intelligence Platform:

Introduction
CereBel Legal Intelligence submits this Amicus Curiae Brief in support of the Court's consideration of sanctions against attorneys Peter LoDuca and Steven A. Schwartz and the law firm Levidow, Levidow & Oberman, P.C. (the "Subject Attorneys") under Rule 11(b)(2) & (c) and 28 U.S.C. § 1927. The Subject Attorneys are accused of citing non-existent cases and submitting non-existent judicial opinions as a result of their use of the generative artificial intelligence software application ChatGPT, developed by OpenAI. The Court's determination of whether the Subject Attorneys' use of GenAI software was typical and innocent or careless and irresponsible, and the degree of Rule 11 sanctions issued, could affect the development and adoption of these technologies by legal practitioners. Miscalibrated sanctions could harm the Legal AI software industry, depriving practitioners and their clients of potential productivity gains.
Interests of the Amicus Curiae
- As an operator of services that apply Generative AI to legal practice, the Amicus has a direct interest in the technology to which the errors are attributed. The outcome of the sanctions proceeding could affect the Amicus's business and the wider market for legal AI software.
- The Amicus possesses expertise and knowledge in Generative AI and its use in legal research that can provide timely and valuable information to the Court in evaluating the reasonableness of the attorneys' conduct and the potential impact of sanctions.
- Generative artificial intelligence technologies, including large language models (LLMs) such as ChatGPT, have been developed by OpenAI and other major technology companies, including Microsoft, Google, Meta, NVIDIA, and Amazon. These technologies have the potential to revolutionize the legal profession, and ChatGPT has been widely adopted since its release in November 2022.
- LLMs like ChatGPT are trained in a two-step process: large-scale exposure to diverse text corpora, followed by supervised fine-tuning. ChatGPT's development and iterative enhancements have increased its capabilities, but have also prompted in-product safety notices cautioning users about the software's limitations.
- While LLMs offer many benefits to the legal community, including time-saving efficiencies and enhanced accuracy, they also have shortcomings, including the potential for generating fictitious content, biases, and errant reasoning. Software developers are continuously working to improve their models and address these issues.
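The two-step training process described above can be illustrated with a toy sketch. This is not an actual LLM; the bigram model, corpus text, and weighting scheme are invented for illustration only. Step 1 "pretrains" a next-word model on a broad corpus; step 2 "fine-tunes" it by giving extra weight to curated, supervised examples, shifting the model's predictions:

```python
from collections import defaultdict

def train(counts, text, weight=1):
    # Count each observed (word, next word) pair, optionally weighted.
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += weight

def predict(counts, word):
    # Return the most frequently observed follower of `word`, if any.
    followers = counts[word.lower()]
    return max(followers, key=followers.get) if followers else None

counts = defaultdict(lambda: defaultdict(int))

# Step 1: exposure to a diverse text corpus.
train(counts, "the court issued an order the court denied the motion")

# Step 2: supervised fine-tuning on a curated pair, weighted more heavily.
train(counts, "the court granted sanctions", weight=5)

print(predict(counts, "court"))  # → granted (fine-tuning shifted the prediction)
```

Real LLMs use neural networks rather than word counts, but the principle is the same: broad pretraining establishes general behavior, and fine-tuning on curated examples steers it.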
A. Colorable Reasonableness of the Attorneys' Belief in the Legal Research
- The attorneys' belief that citations and judicial opinions generated by ChatGPT were bona fide may not have been unreasonable at the time of consultation. ChatGPT gained widespread use and positive media coverage after its release, and the attorneys' use of the software predated many of the safety enhancements implemented since.
- Discrepancies in submitted chat transcripts may be attributed to the use of different language models, and the Court should give higher weight to the complete transcripts rather than partial screenshots.
- The use, evaluation, and perfection of chat transcripts as evidence raise novel issues that warrant careful treatment. The Amicus provides guidance on generating evidence from LLMs, emphasizing the importance of preserving complete transcripts and the relevant model parameters.
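The guidance above on preserving complete transcripts and parameters can be sketched as follows. This is a hypothetical recorder for illustration only, not a court-mandated format or any vendor's actual tooling; the class and field names are invented. It captures the exact model identifier, the sampling parameters, and every timestamped turn, and appends a digest so later alteration of the record is detectable:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class TranscriptRecord:
    """Complete record of one LLM consultation, suitable for later review."""
    model: str        # exact model identifier used for the session
    parameters: dict  # sampling settings such as temperature
    turns: list = field(default_factory=list)

    def add_turn(self, role: str, content: str) -> None:
        # Timestamp each prompt/response so the full sequence can be verified,
        # unlike a partial screenshot.
        self.turns.append({
            "role": role,
            "content": content,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def export(self) -> str:
        # Serialize the complete transcript and append a SHA-256 digest so
        # any later alteration of the record is detectable.
        body = json.dumps(asdict(self), sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        return json.dumps({"transcript": asdict(self), "sha256": digest})

# Example: record the complete exchange, including the parameters used.
record = TranscriptRecord(model="example-llm", parameters={"temperature": 0.0})
record.add_turn("user", "Summarize the cited opinion.")
record.add_turn("assistant", "...")
print(record.export())
```

The point of the sketch is evidentiary completeness: the model, its settings, and every turn travel together in one verifiable record.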
B. Sanctions Must Focus on Conduct, Not Enabling Tools
- The Court should consider the potential impact of sanctions on the bar and legal practitioners' use of similar technologies. Harsh sanctions could dissuade attorneys from adopting innovative tools that enhance the quality and efficiency of legal services.
- The standing orders of other district courts, such as those in the N.D. Tex. and N.D. Ill., demonstrate the need for caution to avoid setting harmful precedents. Certification requirements adopted without fully considering the interests of all stakeholders may impede access to justice and inhibit innovation.
- Judicial rules and sanctions should not preempt legislative action or hastily establish new laws. The United States Senate and the House of Representatives have shown an interest in regulating artificial intelligence, and a deliberative process should be given time to address the complex issues surrounding AI's use in the legal profession.
C. Discipline Conduct, Not Tools
- The competence and ethical duties of attorneys remain paramount regardless of the tools they use. Legal professionals are responsible for the work they provide to clients and courts, and technology does not absolve them of professional or ethical duties.
- Professional conduct rules have evolved to address the challenges posed by technology in the legal profession. Attorneys are expected to maintain competence in the use of technology, including Generative AI, and should adhere to all ethical obligations.
Conclusion

The Court should consider the potential impact of sanctions on the legal AI software market and the wider judiciary's perception of Generative AI. The Amicus provides valuable information and recommends caution in issuing sanctions that could impede progress in the field. The Court should focus on attorney conduct rather than the tools attorneys use, and should avoid preempting legislative action or making new law in haste.