Navigating Ethical and Regulatory Challenges of Generative AI in Law Firms


The legal profession is undergoing a profound transformation as technology advances, with Artificial Intelligence (AI) emerging as a central player. Generative AI, in particular, is at the forefront of this change, promising to enhance efficiency and streamline various legal processes. However, the integration of AI into legal practice presents both opportunities and challenges. Law firms that do not adapt to this technological shift may face a competitive disadvantage, but the adoption of generative AI must be managed carefully to mitigate potential risks.

ETHICAL RISKS POSED BY AI

As generative AI becomes more prevalent, law firms are increasingly confronted with the need to understand and regulate its use. Clients are asking how AI can reduce legal expenses, making it imperative for law firms to embrace this technology while managing its application responsibly. Developing a robust policy on the acceptable use of generative AI is therefore essential to safeguard both the law firm and its clients.


STEP 1: KNOW THE TECHNOLOGY

Generative AI is a subset of AI that creates new content—whether text, images, audio, or other media—based on patterns learned from existing data. Unlike traditional AI models that analyze data, generative AI produces original outputs that mimic or extend the characteristics of the input data. Popular products like ChatGPT, Bing, and Bard are designed for general use, but there are also AI tools tailored specifically for legal practice, such as those developed by Westlaw, LexisNexis, and Harvey AI.

STEP 2: KNOW THE REGULATION AND THE RISKS

Regulations

The regulatory landscape for AI in the legal field is evolving rapidly. In August 2023, the American Bar Association (ABA) established a Task Force on Law & Artificial Intelligence to explore the ethical implications and potential risks of AI in legal practice. This task force examines issues such as bias, cybersecurity, privacy, and the impact of AI on access to justice and legal education.

On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative AI Tools. This opinion focuses on the duties of Competence (Model Rule 1.1), Confidentiality (Model Rule 1.6), Communication (Model Rule 1.4), Candor Toward the Tribunal (Model Rules 3.1, 3.3, and 8.4(c)), Supervisory Responsibilities (Model Rules 5.1 and 5.3), and Fees (Model Rule 1.5).


State bars are also addressing the ethical use of AI. California issued an ethics advisory opinion in November 2023, advising attorneys to disclose the use of generative AI to clients and to ensure AI-generated work does not result in inflated billing. Florida followed with similar guidance, emphasizing transparency and reasonable billing practices. Other states, including New Jersey and Michigan, are expected to issue their own advisory opinions soon.

Federal courts are also addressing this issue. The Fifth Circuit Court of Appeals introduced Rule 32.2, requiring certification that AI-generated content has been reviewed for accuracy. The Eastern District of Texas has adopted a rule requiring attorneys to verify AI-generated content and to maintain independent legal judgment.

Ethical Risks

The ethical implications of using generative AI are substantial. Key considerations include:

Competence (Rule 1.1): Lawyers must use technology competently, staying informed about its benefits and limitations.

Client Communication (Rule 1.4): Attorneys should inform clients about the use of AI and its implications.

Reasonable Fees (Rule 1.5): Firms must ensure billing practices reflect efficiency gains from using AI and do not misrepresent time spent.

Confidentiality (Rule 1.6): Client information must be protected, and AI tools should not lead to unauthorized disclosures.

Candor to the Court (Rules 3.1, 3.3, and 4.1): AI-generated content must be accurate, avoiding false statements or misrepresentations.

Supervision (Rules 5.1 and 5.3): Firms must implement policies to ensure AI use complies with ethical standards and that all staff adhere to these guidelines.

Malpractice Implications

The potential for malpractice claims increases with the use of generative AI. One notable risk is the phenomenon of “hallucinations,” where AI generates plausible-sounding but inaccurate or fabricated information. This can include incorrect citations or fictitious case law, leading to serious consequences, including sanctions or disciplinary action. The risk is heightened when attorneys use AI in unfamiliar areas of law. Rigorous peer review and fact-checking are essential to mitigate these risks.

OTHER CONSIDERATIONS

Clients often have specific guidelines regarding the use of AI by outside counsel. Law firms must be aware of these guidelines and ensure compliance to avoid issues such as unauthorized AI use or billing discrepancies.

PREPARING AN ACCEPTABLE USE POLICY FOR GENERATIVE AI

To manage the risks associated with generative AI, law firms should develop a comprehensive acceptable use policy. This policy should outline how AI tools can be used within the firm and establish protocols to address ethical, regulatory, and malpractice risks.

Define Acceptable Use: The policy should specify which AI products are permitted, distinguishing between general-purpose tools and those designed specifically for legal practice. It should also address whether non-legal AI tools are allowed.

Safeguard Ethical Standards: The policy must ensure AI is not used as a substitute for legal judgment. Attorneys should obtain client consent before using AI and ensure compliance with outside counsel guidelines. Confidentiality must be maintained, and all AI-generated content should be verified for accuracy.

Maintain Transparency: Firms should keep detailed records of AI usage, including the data ingested and any edits made to AI-generated content. Time entries for AI-assisted work should be accurate and reflective of the actual time spent.

Training and Compliance: Once the policy is established, it is crucial to train all attorneys and staff on its details and monitor compliance. Regular audits should be conducted to ensure adherence to the policy and to address any issues that arise.

CONCLUSION

AI is an integral part of the future of the legal industry. Law firms must embrace this technology while implementing rigorous policies to mitigate ethical and regulatory risks. Generative AI offers significant benefits but should complement, rather than replace, human expertise. By balancing innovation with careful oversight, law firms can harness the advantages of AI while maintaining the integrity of their practice and protecting their clients.

Bryce Riddle and Aram Desteian

Bryce Riddle, a shareholder and litigator at Bassford Remele, co-chairs its data privacy and cybersecurity practice group. He practices complex commercial litigation, data privacy and cybersecurity, and employment litigation. Bryce also has class action experience in data breach and consumer privacy litigation. [email protected], 612.376.1624. Aram Desteian is a shareholder with Bassford Remele and serves as its general counsel. He represents businesses in complex commercial litigation and counsels and defends lawyers against professional liability claims, including lawsuits, investigations, board proceedings, and hearings involving legal ethics and legal malpractice. Aram is chair of the MSBA professionalism and ethics section. [email protected], 612.746.1088.
