
The use of artificial intelligence (AI) has become prevalent in nearly every sector of society, and the practice of law is no exception. However, as with other technological innovations, the rapid development and refinement of AI have complicated, or at least delayed, the adoption of adequate regulation for its use by attorneys.
Various incidents, many of them widely reported, have arisen from the improper use of artificial intelligence by legal professionals, whether intentional or inadvertent, some with serious consequences. Various institutions have adopted recommended guidelines for responsible AI use in the legal profession, and more regulatory developments are expected.
Some rules regarding AI use have been adopted for judges and court personnel, but none yet regulate the use of AI by attorneys.
In July 2025, the California Judicial Council adopted a rule on the use of artificial intelligence by judges and court staff. Specifically, subsection (b) of Rule 10.430 of the California Rules of Court establishes that:
“Any court that does not prohibit the use of generative AI by court staff or judicial officers must adopt a generative AI use policy by December 15, 2025. This rule applies to the superior courts, the Courts of Appeal, and the Supreme Court.”
This rule, which went into effect September 1, 2025, allows California courts to prohibit AI use by judges and staff. However, courts that do not prohibit such use must develop and implement an internal policy regulating it.
For attorneys, by contrast, only one general directive currently exists: the Practical Guide for the Use of Generative Artificial Intelligence in the Practice of Law, published by the California State Bar in November 2023, which applies to all practicing attorneys in California.
It is important to note that the guide is not binding. It merely offers recommendations deemed “best practices.” Its guidelines on AI use align with obligations already imposed by state laws and professional conduct rules governing the practice of law, such as the California Business and Professions Code and the Rules of Professional Conduct. These include duties of confidentiality between attorney and client (Bus. & Prof. Code §6068; Rules 1.6, 1.8.2), competence and diligence (Rules 1.1, 1.3), non-discrimination (Rule 8.4.1), truthfulness in statements and conduct (Rules 3.1, 3.3), and supervision of subordinates (Rules 5.1, 5.2, 5.3).
In other words, the guide serves to remind attorneys of existing ethical duties and their application to the modern use of artificial intelligence.
Although it is foreseeable that specific laws or rules regulating AI use by attorneys will eventually be enacted in California, such provisions do not yet exist.
California’s broader legislative efforts to regulate AI have focused on the proposed Transparency in Frontier Artificial Intelligence Act. The bill, still pending Governor Gavin Newsom’s approval, would regulate developers of advanced AI models (such as ChatGPT, Claude, and Gemini), requiring them to establish safety protocols, report significant incidents, ensure transparency, and protect whistleblowers.
The number of cases in which U.S. courts have sanctioned attorneys for improper AI use has increased dramatically. In 2025 alone, federal courts in Alabama, Colorado, New Jersey, Texas, and Wyoming, among others, imposed sanctions on lawyers who submitted filings containing fictitious case citations generated by AI. Sanctions have included monetary fines, disqualification from ongoing cases, and additional disciplinary measures.
A notable example is Wadsworth v. Walmart, Inc., where a federal court in Wyoming sanctioned three lawyers from one of the 50 largest U.S. firms after they filed a 100-page brief containing eight fake citations produced by AI. The attorneys used ChatGPT to prepare the document, which they signed and filed without verifying the citations’ authenticity.
The judge found this conduct to violate Rule 11 of the Federal Rules of Civil Procedure, which requires attorneys to certify that all filings are based on real facts and legal authorities. Consequently, the court publicly reprimanded the attorneys, ordered them to pay opposing counsel’s fees related to reviewing the false citations, and required them to report the sanction to their respective state bar associations.
In September 2025, the California Court of Appeal in Noland v. Land of the Free, L.P. sanctioned an appellant’s attorney for submitting appellate briefs containing multiple fabricated citations generated by AI, including citations to non-existent precedents and to real cases that did not support the propositions asserted. The court fined the lawyer $10,000 and ordered that both the client and the California State Bar be notified.
Recent experiences show that AI can be a valuable tool in legal practice but also a serious risk when it replaces rather than complements the lawyer’s professional judgment. Sanctions across U.S. courts reveal that failure to verify and supervise AI output undermines not only the lawyer’s credibility but also public trust in the judicial system.
In California, the State Bar’s guidelines and the Judicial Council’s new rule represent early progress. Still, the real challenge lies in ensuring that each attorney integrates these technologies with discernment, rigor, and responsibility so that technological innovation strengthens, rather than erodes, the essential ethical role of the legal profession.