New Grads, ChatGPT and the Legal Dangers They Pose

A new crop of law grads has entered the workforce, and if the class of 2022 is any indication, they’re eager to settle into positions. According to the American Bar Association (ABA), just 10 months after graduation, nearly 85% of that class (30,512 graduates) were already employed in “full-time, long-term Bar Passage Required or J.D. Advantage jobs.”

This generation of recruits is particularly tech-savvy, too. As children, they used FaceTime and instant messaging as naturally as their parents once picked up a pencil or phone. And with more than half of all Americans now millennials or younger, they bring valuable digital skills to the table that can strengthen a firm’s capabilities in areas from communications to modern customer experiences.

Further, they have likely used generative AI tools like ChatGPT, but that can be a double-edged sword. On one hand, legal leaders have an opportunity to acquire talent ready-made for these powerful, time- and cost-saving tools. On the other, if the technology is used improperly, their work with ChatGPT on the job could expose the firm itself to serious legal dangers.

What’s the Risk of ChatGPT?

Make no mistake, generative AI is here to stay and will have a major impact on the future of all industries. But it’s immature, just now taking its first steps, and as with any disruptive technology, there are gaps between its capabilities and how it should be used. This is especially true in the legal realm.

Already, our space has produced embarrassing ChatGPT gaffes. A Manhattan judge fined two lawyers for giving him a legal brief full of fictional cases and citations, courtesy of ChatGPT. He further ordered the attorneys to send copies of his opinions to the very real judges whose names made it into the filing. We’ve also caught a glimpse of what future cases will hold. The first major defamation challenge has been filed against ChatGPT’s parent company, OpenAI. In it, a radio host claims the chatbot produced a false legal complaint accusing him of defrauding and embezzling funds from a nonprofit.

It doesn’t take a lot of imagination to see the tremendous legal and reputational repercussions looming on the horizon for law firms that don’t keep tabs on the use of ChatGPT. And new recruits, who may have used the technology in their studies and are anxious to prove themselves in their work, could find it hard to resist. Law firms, too, will be tempted by the cost-efficiency gains that come when research and the drafting of materials are completed in clicks.

Where’s the Problem?

When you boil it down, the problem with ChatGPT is the potential for ethical violations, including the following:

  • Breach of attorney-client privilege: Information entered into ChatGPT may be retained by the service and used to shape future responses, putting it beyond the firm’s control. If a new attorney unknowingly enters unredacted, privileged details into ChatGPT for something like research or writing a brief, those details could end up being used to answer someone else’s query (a simple pre-submission filter, sketched after this list, is one way firms try to guard against this).
  • Intellectual property infringement: ChatGPT draws on material from across the Internet, so an attorney relying on its output could unwittingly reproduce someone else’s work and infringe on their intellectual property. Tools like CopyScape may help identify verbatim plagiarism, but concept theft is far harder to catch in review.
  • Improper handling of information: The public Internet holds inaccurate, offensive and personal data. These materials can make their way into the data sets used to train generative models and, unchecked, may eventually appear in legal documents and materials for which firms could be held liable.
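To make the privilege risk above more concrete, here is a minimal, hypothetical sketch of the kind of pre-submission filter a firm might run before any text reaches an external AI service. The pattern list and the check_prompt helper are illustrative assumptions, not a description of any real firm’s tooling:

```python
import re

# Illustrative patterns only; a real firm would maintain a far richer list
# (client names, matter numbers, account details, medical information, etc.).
PRIVILEGED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-style numbers
    re.compile(r"\bMatter\s+No\.?\s*\d+\b", re.I),  # internal matter numbers
]

def redact(text: str) -> str:
    """Replace anything matching a privileged pattern with a placeholder."""
    for pattern in PRIVILEGED_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def check_prompt(text: str) -> tuple[str, bool]:
    """Return the redacted prompt plus a flag so the firm can block the
    request or route it for human review when privileged text was found."""
    cleaned = redact(text)
    return cleaned, cleaned != text
```

Pattern matching alone will never catch every client name or case detail, so a filter like this only works alongside human review and clear policy.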

We’re already seeing the issues ChatGPT causes when its output goes unchecked or is reviewed only by less experienced legal eyes. These problems should decrease over time as instances like the Manhattan case illustrate how not to use generative AI.

Can You Trust It?

If law firms want technology like ChatGPT to work well, they need to focus on feeding it healthy, relevant information. The more quality data firms can use to train and guide their own AI models, the better the systems will perform. Regardless, the ethical duty to protect confidentiality can never be ignored and complacency is the biggest problem.

There’s a black box element to generative AI tools. They may produce results, but there’s no way to completely explain how those conclusions were reached, and blind trust is not advised. Though it’s unclear whether a leak of privileged information could be traced back to a particular attorney or practice, even the slightest chance is terrifying. And even if a leak can’t be traced, information is privileged for good reason, and failing to protect it properly raises serious ethical concerns.

As firms consider ways to harness generative AI, they should train tools on their own databases. Even so, while it may be tempting to assume anything kept within the firm can be shared with all staffers, some privileged information needs to be sequestered even internally. Firms need to figure out how to ensure data stays within its intended use, and this applies to everyone, particularly those holding a Juris Doctor with ink that’s not yet dry.
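One way to read “sequestered even internally” in practice is to tag every internal document with an access level and filter what a model may see by the requesting user’s clearance, whether for fine-tuning, retrieval or prompt context. The levels and class names below are assumptions for illustration only, not a description of any particular firm’s system:

```python
from dataclasses import dataclass
from enum import IntEnum

class AccessLevel(IntEnum):
    FIRM_WIDE = 1      # templates, general know-how
    MATTER_TEAM = 2    # restricted to the matter team
    PARTNERS_ONLY = 3  # highly sensitive or privileged material

@dataclass
class Document:
    doc_id: str
    text: str
    level: AccessLevel

def usable_documents(store: list[Document], clearance: AccessLevel) -> list[Document]:
    """Only documents at or below the requester's clearance may feed a model,
    whether for fine-tuning, retrieval or prompt context."""
    return [doc for doc in store if doc.level <= clearance]
```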

How Do You Control ChatGPT?

Recent grads need to tread lightly when using ChatGPT at their new jobs. Law firms must also have protocols in place to guide them and ensure they’re following best practices, and those protocols must be vigorously enforced. So, how can managers protect their teams from potential vulnerabilities?

The best way to learn something is through experimentation. Despite the risks, you should want your people using generative language tools to experiment and build familiarity, just not with privileged information. Education is important, and as teams better understand the technology, its strengths and weaknesses become more apparent. If the Manhattan attorneys had had a more complete understanding of ChatGPT, they would never have turned in their brief blindly. Simply put, using ChatGPT for legal research is not safe without thorough verification.

Insist staff steer clear of free or open-source technologies. Make sure they only use databases the firm knows, trusts and pays for, such as Westlaw, LexisNexis and Fastcase. To supervise employees, there need to be checks and balances in place and a way to “see under the hood.” Mid-level and senior managers must be in sync, cross-checking and auditing employee usage. If a teacher can spot a teenager who has turned in ChatGPT-inspired homework, you can bet someone will notice a law firm doing the same. This is especially important when overseeing younger generations who may be inclined to push the envelope.
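Part of “seeing under the hood” can be as simple as logging every prompt and response so mid-level and senior managers have something concrete to cross-check. A minimal sketch follows; the log format and function name are assumptions, and a real firm would also need to secure the log itself, since it may contain sensitive text:

```python
import json
import logging
from datetime import datetime, timezone

# One JSON record per line so supervisors can audit or sample usage later.
logging.basicConfig(filename="ai_usage_audit.jsonl",
                    level=logging.INFO,
                    format="%(message)s")
audit_log = logging.getLogger("ai_usage_audit")

def log_ai_request(user: str, tool: str, prompt: str, response: str) -> None:
    """Record who asked what, with which tool, and what came back."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }))
```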

Successfully integrating new grads into multi-generational teams is crucial, too. New recruits can be a huge asset when proper controls are in place because they are less timid about leveraging the technology, while older generations may have to be pushed and prodded. Consider creating a task force to study these tools, and be sure to involve new hires. Not only is their perspective unique, but inclusion in these projects also helps validate their ambition and is a great way to open the broader organization’s eyes to the possibilities of generative AI. Once the task force study is complete, the senior team can decide on a path forward and develop best practices accordingly.

Is ChatGPT Worth It?

When teams are fully conversant in the technology, and usage is controlled, the efficiency gains of a tool like ChatGPT will provide a true competitive edge in the marketplace. In some ways, it’s similar to the early days of the Internet and the resistance many lawyers had to using it to market themselves. Those who took the leap may not have had perfect results, but they learned along the way, enjoyed much greater success, and put a lot of distance between themselves and the firms that sat on their hands.

ChatGPT can automate the drafting process, analyze legal documents and provide draft language for briefs, contracts and other materials. When used with chatbots, it can answer questions and bolster customer support. All of this can save law firms a lot of time, which translates into significant cost savings and a better ability to make the most of staff. Further, fresh grads can leverage newer communications channels to reach younger generations of consumers where they are and help a firm deliver the digital experiences those consumers have come to expect.
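As a rough illustration of the drafting workflow described above, here is a minimal sketch using OpenAI’s Python client. The model choice and prompts are assumptions, and the output is a first draft only, which an attorney still has to verify and edit:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def draft_clause(instruction: str) -> str:
    """Ask the model for a first draft of contract language. The output is a
    starting point only; an attorney must review, verify and edit it before
    it reaches a client or a court."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You draft neutral, plain-English contract clauses."},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content

# Example (note: no client names or privileged facts in the prompt):
# print(draft_clause("Draft a mutual confidentiality clause for a services agreement."))
```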

The remainder of 2023 will be all about discovering which generative AI efficiencies justify investment and which features don’t deliver value. There will also be an emphasis on continuing to understand what level of human intervention is necessary and how hands-on managers of generative language tools need to be. Fact-checking, editing, heading off plagiarism: the list of potential dangers is long, and human intervention is imperative.

All that said, generative AI tools are still very much worth the risk and will play a major role in the future, just as younger generations of law grads will. The trick is understanding the technology, controlling usage, leveraging its attributes and setting up the tech-savvy new graduates to responsibly use it to innovate.

Seth Price

Seth Price is the founder of Price Benowitz, a law firm he grew to 40 attorneys in a decade. He achieved a perfect 10/10 on the attorney rating site AVVO. He was designated a Thomson Reuters Super Lawyer and Top 100 Trial Lawyer by the National Trial Lawyers Association. Price also created the pioneering marketing agency, BluShark Digital.
