The Ethical Ambiguity of AI in Legal Practice

H.G. Wells once wrote, “Adapt or perish, now as ever, is nature’s inexorable imperative.” This statement rings especially true for the legal field right now. AI is here to stay, and successful law firms will find a role for it in their practice. The problem is that there are no established rules.

We are all still figuring out what AI can and should do. When this technology burst onto the scene, several lawyers put too much faith in it, relying on algorithms to supply citations for legal briefs, only to submit their documents and learn that the cases were fabricated. Despite these setbacks, we cannot let fear and skepticism deter us from using AI altogether. We must critically consider its role in our practice.

It’s clear AI is not a senior lawyer. In their current state, these technologies are more like summer associates, meaning we must second-guess their work and closely supervise their activities. You wouldn’t put a summer associate in charge of building landmark cases. Instead, you’d relegate them to conducting due diligence and reviewing documents. We would be wise to keep this analogy in mind as we explore the technology. 

While awaiting concrete expectations and guidance from bar associations, legal professionals must understand how GenAI works and its potential pitfalls to make informed decisions on its use. 

What is GenAI?

Most of us have heard a lot about generative artificial intelligence (GenAI) in the news, but we may not understand the nuances of these technologies. Our field is already very familiar with extractive AI, which retrieves specific information from existing resources. We already use this for e-discovery and legal research, among other applications. 

GenAI, by contrast, creates entirely new content. Many GenAI systems are built on large language models (LLMs), which are trained on vast amounts of data and learn to identify patterns, structures and features within that data. From there, GenAI produces new content similar to the original information. For example, GenAI can write a new client email based on the millions of emails it has already read, essentially predicting which words someone would be likely to write.
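
To make the “predicting” idea concrete, here is a deliberately oversimplified sketch in Python. It counts which word most often follows another in a handful of made-up emails; real LLMs do something far more sophisticated with neural networks trained on billions of documents, but the core task, guessing the most likely next word, is the same. The sample emails and function names below are purely illustrative.

    # A toy illustration of the "predict the next word" idea behind LLMs.
    # Real models use neural networks trained on billions of documents;
    # this sketch just counts which word tends to follow another in a
    # tiny, made-up sample of emails.
    from collections import Counter, defaultdict

    sample_emails = [
        "please find the attached draft for your review",
        "please find the signed agreement attached for your records",
        "please review the attached draft agreement",
    ]

    # Count how often each word is followed by each other word (a bigram model).
    next_word_counts = defaultdict(Counter)
    for email in sample_emails:
        words = email.split()
        for current_word, following_word in zip(words, words[1:]):
            next_word_counts[current_word][following_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the sample data."""
        candidates = next_word_counts.get(word)
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(predict_next("please"))    # -> "find"
    print(predict_next("attached"))  # -> "draft"

A commercial LLM makes the same kind of guess with a far richer model of context, which is why its output reads fluently even when the underlying facts are wrong.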

Not all GenAI solutions are the same. General-purpose models, like GPT-4o (the LLM used by ChatGPT), are trained on vast swaths of the public internet. These tools have many applications but lack the nuance required for technical tasks like contract drafting.

LLMs can also be trained on tailored, subject-matter-specific data for greater functionality within specialized fields. AI models with legal databases are more equipped than general LLMs to handle the complexities of legal work, but they are not foolproof. 

What are GenAI’s pitfalls?

Regardless of which tool you choose, you can’t use GenAI with reckless abandon. It has several flaws that you must be conscious of.

Hallucinations

GenAI sometimes just makes things up. Many lawyers have faced sanctions or suspensions for using GenAI to draft documents without double-checking the information. While legal tech solutions perform better than general LLMs on legal work, research shows they still hallucinate on roughly one in six queries. You must always check the tool’s work.

Confidentiality breaches

Many GenAI tools learn from the data you enter. If you put confidential client information into a general-purpose AI, that information may be used to train the model and can effectively become public, a breach that can result in sanctions. Legal teams must carefully consider how a solution uses their data.

Bias

AI models can perpetuate existing biases in the training data. Legal professionals must understand these risks, take steps to prevent bias and consistently monitor a platform’s output.

What are the rules around GenAI?

GenAI is evolving so fast that bar associations are scrambling to keep up. Right now, there are no hard-and-fast rules on using AI.

The American Bar Association released its first guidance in July, stating, “Lawyers using generative artificial intelligence tools must fully consider their applicable ethical obligations, including their duties to provide competent legal representation, to protect client information, to communicate with clients, to supervise their employees and agents, to advance only meritorious claims and contentions, to ensure candor toward the tribunal, and to charge reasonable fees.”

The opinion offers baseline suggestions rather than concrete rules. It calls GenAI a “rapidly moving target” and states that updated guidance will come as the technology changes. Several bar associations in larger states have released their own guidelines, but most have yet to take firm action or issue clearly defined rules. It may take active pushing from bar members to prompt additional AI guidance and education.

While formal regulations are likely years away, state and local bar associations should take the initiative to broadcast the ABA’s opinion and educate their members on the ethical use of GenAI. The next wave of AI guidance should include more explicit recommendations, such as specific language for client discussions about AI and how to bill for the use of AI.

What can law firms do to use GenAI responsibly?

Before committing to AI, you must fully understand your ethical obligations and responsibilities and how the AI platform aligns with those expectations. You already supervise your staff to ensure they follow rules and uphold standards; you should do the same for any software solution. Collaboration between AI systems and human lawyers is essential to maximize AI’s potential while minimizing the risks. 

Consider using a legal-specific platform, which will have the proper guardrails in place to ensure data confidentiality, and which is more likely to generate appropriate responses than a general-purpose AI model. When evaluating potential solutions, ask questions about the software’s data policies before you commit to anything. 

Many legal tech solutions use retrieval-augmented generation (RAG), which gathers information from a specific knowledge base to reduce the risk of hallucinations. While this is an improvement on basic LLMs, it is not infallible. 
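
For readers curious what that looks like under the hood, here is a minimal sketch of the RAG pattern in Python. The knowledge base, the keyword-overlap scoring and the call_model stub are all hypothetical stand-ins; real legal platforms use vector search over their own databases and a commercial LLM behind the scenes. The point is simply that the model is instructed to answer from retrieved, citable sources rather than from memory.

    # A simplified sketch of the retrieval-augmented generation (RAG) pattern.
    # The documents, scoring method and call_model stub are hypothetical;
    # real legal platforms use vector search and a commercial LLM.

    knowledge_base = {
        "doc-101": "Engagement letters must state the scope of representation.",
        "doc-102": "Client funds must be held in a separate trust account.",
        "doc-103": "Conflicts of interest must be checked before client intake.",
    }

    def retrieve(question, top_k=2):
        """Rank documents by naive keyword overlap with the question."""
        question_words = set(question.lower().split())
        scored = sorted(
            knowledge_base.items(),
            key=lambda item: len(question_words & set(item[1].lower().split())),
            reverse=True,
        )
        return [doc_id for doc_id, _ in scored[:top_k]]

    def call_model(prompt):
        """Stand-in for the vendor's LLM call; echoes the prompt so the
        sketch runs without any external service."""
        return prompt

    def answer_with_sources(question):
        """Build a prompt that tells the model to answer only from the
        retrieved passages and to cite them by ID, so the output can be
        verified against the underlying documents."""
        doc_ids = retrieve(question)
        context = "\n".join(f"[{d}] {knowledge_base[d]}" for d in doc_ids)
        prompt = (
            "Answer using only the sources below and cite them by ID.\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )
        return call_model(prompt)

    print(answer_with_sources("How should client funds be handled?"))

Grounding the prompt in identified documents is also what makes source citation possible, which matters for the verification step discussed next.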

Double-checking GenAI’s work seems like a big commitment. However, you can streamline this process by using legal tech that cites its sources. With links to specific documents, you can easily click through and verify the information included in the draft (such as any cited precedents). 

Responsible AI use also involves staying on top of technology trends and industry guidance as they will impact AI’s application in legal practice. You must continuously learn, experiment and adapt your strategy to align with best practices, ethical obligations and any new regulations. Education may include discussions with others in the legal profession, reading opinions by bar associations and taking CLE classes.

AI supports non-billable tasks, too

While the use of AI for substantive legal work is hotly debated, AI can be safely and ethically used for non-billable tasks, reducing administrative burdens and increasing profitability. Applications for consideration include:

  • Billing
  • Timekeeping 
  • Financial management
  • Client intake and onboarding
  • Document management
  • Client communication
  • Marketing and business development
  • General organization

By automating repetitive tasks like scheduling, research, document handling and invoicing, lawyers gain more time for strategic work and relationship building.

Do your due diligence

You have many factors to consider when adopting and implementing AI in your legal practice. As legal professionals, we tend to exercise an abundance of caution, and there’s nothing wrong with that.

Many lawyers, especially at small firms, don’t have time during the work week to experiment with technology. Your foray into GenAI doesn’t have to start at the office. Try using widely available GenAI tools at home. Experiment with having an AI organize your schedule, plan your meals or write bedtime stories. Seeing its applications in your personal life can give you ideas about its value in your legal practice. 

Given the lack of rules around AI, we are responsible for understanding and vetting new technology before implementing it in our workflows. However, letting fear and skepticism deter us from using AI will eventually backfire as other firms adopt this new tool and thrive with it. Keeping an open mind while weighing AI’s value, shortcomings and ethical considerations is critical to staying competitive.

Jordan Turk

Jordan Turk is a practicing family law attorney in Texas and the Legal Technology Advisor at Smokeball. In addition to her law practice, she’s passionate about legal technology and how it can revolutionize law firms. After almost four years of practice with a high-asset family law firm in Houston (and after being frustrated at the lack of automation in her firm), she discovered the world of legal technology, which ultimately brought her to Smokeball. In her podcast, Hacking Law Firm Success with Jordan Turk, she interviews law firm founders about how they grew and scaled their practices, as well as their ethos behind managing a firm.
