On July 29, 2024, the Standing Committee on Ethics and Professional Responsibility of the American Bar Association (“ABA”) published Formal Opinion 512, providing guidance on the ethical use of generative AI tools by legal professionals (the “Opinion”). The Opinion is the latest of several similar ethical guidelines published by various state courts and bar ethics committees, including the September 2023 guidance from the Committee on Professional Responsibility and Conduct for the State Bar of California (“COPRAC”), the January 2024 Florida Bar Ethics Opinion 24-1, the January 2024 New Jersey Supreme Court Notice to the Bar, and the April 2024 Report and Recommendation of the New York State Bar Association Task Force on Artificial Intelligence. We have previously written about the COPRAC guidance and key takeaways for professional firms’ use of AI. To date, no state courts, bar ethics committees, or other advisory bodies on legal practice have chosen to amend or create new ethical rules for generative AI; instead, each has extended existing rules to the use of this new technology.

Although the Opinion is intended to assist lawyers with upholding their ethical and professional responsibility obligations when using generative AI in legal practice, the guidance is also instructive for the responsible use of generative AI outside of the legal profession.

Six Categories of Ethical Considerations

The Opinion is divided into six parts, tracking different ethical considerations outlined in the ABA Model Rules of Professional Conduct.

  1. Competence

Pursuant to Model Rule 1.1, lawyers must exercise the “legal knowledge, skill, thoroughness and preparation reasonably necessary” for competent representation, which includes understanding “the benefits and risks associated” with technologies used for legal services. Applied to generative AI, the Opinion provides:

  • Lawyers should have a reasonable and current understanding of the specific capabilities and limitations of any generative AI tool that they wish to use. This understanding should account for inherent risks of generative AI, including hallucinations, bias, and limitations in reliability, accuracy, and completeness.
  • Lawyers should not rely on generative AI outputs without appropriate independent verification or review.
  • Although lawyers should independently verify or review generative AI outputs, they need not verify every single output. The appropriate amount of review depends on the specific task and tool used (e.g., it may be appropriate to review only a subset of a large corpus of documents summarized by a tool).
  2. Confidentiality

Model Rule 1.6 requires lawyers to keep all information related to their representation of clients confidential, including by making reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, such information. Model Rules 1.9(c) and 1.18(b) extend similar protections to former and prospective clients. The Opinion recommends:

  • Prior to inputting information related to the representation of a client into a generative AI tool, lawyers should consider the likelihood of disclosure of, or unauthorized access to, the information, the sensitivity of the information, the difficulty of implementing safeguards, and the extent to which safeguards would negatively impact the lawyer’s ability to represent the client.
  • Lawyers should be aware that “self-learning” generative AI tools (i.e., generative AI tools that train on input data) create a risk that confidential information input into the tool may be disclosed to others; accordingly, lawyers must obtain informed client consent before inputting confidential client information into such tools.
  • Lawyers should understand any applicable terms associated with a generative AI tool prior to its use and consult with relevant technical experts as required.
  3. Communication

Model Rule 1.4 addresses lawyers’ duty to communicate with their clients. Lawyers may be required to disclose their use of generative AI to clients in certain circumstances:

  • Lawyers must disclose their use of generative AI tools if a client asks how they conducted their work or whether they use such tools.
  • Lawyers must be aware of and comply with any disclosure requirements included in their engagement agreements or in applicable outside counsel guidelines.
  • Lawyers must consult with clients about the use of generative AI, if such use is relevant to the basis or reasonableness of attorney fees.
  • Lawyers must generally consider whether there are other circumstances that warrant client consultation about the use of a generative AI tool, considering factors such as the tool’s importance to a particular task, the significance of that task to the overall representation, how the tool will process the client’s information, and the extent to which knowledge of the lawyer’s use of the tool would affect the client’s evaluation of or confidence in the lawyer’s work.
  4. Candor

If generative AI is used in litigation, lawyers’ ethical obligations also extend to the tribunal adjudicating the dispute. Model Rules 3.1, 3.3, and 8.4(c), which address meritorious claims and contentions, candor toward the tribunal, and dishonest conduct, may apply:

  • Lawyers must carefully review AI outputs to ensure that assertions made to a tribunal based on such outputs are not false or misleading, and must correct any false or misleading assertions previously made.
  • Lawyers should be aware of any applicable local rules for proactive disclosure of use of generative AI and make any necessary disclosures.
  5. Supervisory Responsibilities

Model Rules 5.1 and 5.3 require a law firm’s managerial and supervisory lawyers to create effective measures to ensure that all lawyers and nonlawyer assistants conform to the rules of professional conduct. The Opinion notes that:

  • Managerial lawyers must establish clear policies regarding the law firm’s use of generative AI. Such policies should provide for training lawyers and nonlawyers on generative AI and on the ethical, practical, and other risks associated with its use.
  • Lawyers are required to ensure that any work outsourced to third parties and performed with the assistance of generative AI is consistent with any applicable legal, ethical, or professional obligations. Existing ABA ethical guidance on the use of third-party service providers and on confidentiality, reliability, and security risks likewise applies to the use of generative AI.
  6. Fees

Pursuant to Model Rule 1.5, lawyers must ensure that their fees and expenses are reasonable and that they communicate with their clients about the basis for the fees and expenses charged. The Opinion notes that:

  • Generative AI tools may allow lawyers to work more efficiently. Lawyers who bill clients hourly must bill only for the actual time spent on the work, and should likewise account for these efficiencies when charging clients flat fees.
  • Lawyers may bill clients for disbursements but may not bill clients for general overhead expenses. If a tool supports a standard part of the lawyer’s legal practice (e.g., AI tools built into Microsoft Word), it should not be expensed to a client. By contrast, standalone expenses (e.g., generative AI document review tools) may constitute reasonable out-of-pocket expenses that can be charged to the client.
  • Absent an advance agreement, lawyers may charge clients only the direct costs of the generative AI tool, plus the reasonable expenses associated with providing it.
  • Finally, lawyers may not charge clients for the time they spend learning about a generative AI tool that they will regularly use in their practice. However, if a client requests that a specific AI tool be used for their representation and the lawyer is not familiar with that tool, it may be appropriate to bill the client for time spent learning how to use the tool effectively.

Takeaways

Professionals considering whether and how to adopt generative AI into their practice should ensure that AI is used safely, appropriately, and efficiently. To effectively manage risks associated with generative AI use, professional firms should consider:

  • Developing Generative AI Policies. Consistent with the guidance in the Opinion, firms should consider creating policies that clearly outline expectations and any prohibitions or limitations on the use of generative AI in professional practice.
  • Providing Training on Generative AI, Policies, and Ethical Obligations. To ensure that firm personnel are adequately informed of the risks associated with the use of generative AI, the firm’s policies on generative AI, and any applicable ethical or professional responsibility obligations, consider providing training for personnel involved in developing, testing, overseeing, or using generative AI tools.
  • Updating Engagement Letters. For professional firms engaging law firms, it may be appropriate to update engagement letters or other contracts governing the provision of legal services to disclose, restrict, or limit the use of generative AI and outline expectations regarding fees for services rendered using the assistance of generative AI. For law firms, consider ensuring that the firm’s generative AI use is consistent with client engagement letters.
  • Enhancing Third-Party Risk Management. To manage risks associated with the use of generative AI by third-party service providers, including risks that flow from the firm’s own ethical or professional responsibility obligations, consider implementing enhanced diligence, contract negotiation (covering disclosure and any limitations on the use of generative AI in providing the relevant service), and monitoring to ensure that providers’ use of generative AI is consistent with firm expectations.
  • Developing Billing Models. To account for the work done using generative AI, firms should consider whether to continue charging on an hourly basis or to use an alternative fee arrangement.

For legal and other professionals implementing AI risk management frameworks, one core challenge is maintaining vigilance as the technology evolves and conventional expectations regarding the use of generative AI change over time. As the Opinion recognizes, generative AI is a “rapidly moving target” that “will continue to change in ways that may be difficult or impossible to anticipate.” To manage this particular risk, lawyers should consider:

  • Reviewing Existing Guidance. The Opinion interprets existing professional rules and applies those rules to the use of generative AI in the legal profession. Firms and in-house legal departments should consider identifying and remediating any gaps between existing guidance and current practices with respect to generative AI.
  • Staying Current and Anticipating New Guidance. The Opinion also previews that the ABA and other state and local legal advisory bodies will likely need to update their guidance on the use of generative AI as the technology evolves. Other legal, regulatory, and governance bodies will likely take a similar approach. Professional firms and in-house legal departments should closely track any applicable new guidance and adapt their practices as appropriate.
  • Allocating Resources for AI Risk Management. Assess whether any additional resources (personnel or budget) will be needed to assist the firm or in-house legal department with timely tracking and mitigating emerging risks associated with the use of generative AI in professional practice.

The Debevoise AI Regulatory Tracker (DART) is now available for clients to help them quickly assess and comply with their current and anticipated AI-related legal obligations, including municipal, state, federal, and international requirements.

The cover art used in this blog post was generated by Google Gemini.

Authors

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy, and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Erez Liebermann is a litigation partner and a member of the Debevoise Data Strategy & Security Group. His practice focuses on advising major businesses on a wide range of complex, high-impact cyber-incident response matters and on data-related regulatory requirements. He can be reached at eliebermann@debevoise.com.

Michelle Huang is an associate in the Litigation Department.

Josh Goland is a law clerk in the Litigation Department.