Artificial Intelligence and the Practice of Law: Enormous Potential, But Extreme Caution Is Advised

ChatGPT launched on November 30, 2022. A mere two months later, it had an estimated 100 million users. This overnight success has sparked tremendous excitement about the potential of artificial intelligence and chatbots like ChatGPT. As promising as the technology is, it is important to understand just what it can and cannot do today.

AI Tools

Three general-purpose AI tools are ChatGPT 3.5 (free); ChatGPT 4.0 ($20/month subscription); and Bing Chat (free).


ChatGPT 3.5 is what started the surge in excitement about AI, and it is the most widely used of these three options. You can give ChatGPT simple commands, and it will return responses that may or may not be useful. You can then follow up to refine its response. In general, the more explicit your request, the more likely the result will be close to what you are looking for.

ChatGPT (both free and paid versions) logs conversations, including any personal information that you share, and may use this as training data. This makes version 3.5 inappropriate for use in legal work for clients.

The paid ChatGPT 4.0 provides an “incognito mode” where you can optionally turn off “chat history and training.” While this is not a perfect solution, if you are careful with this setting, the paid version could be more appropriate for legal work than the free version.


In early 2023, Microsoft invested $10 billion in OpenAI, the company that created ChatGPT. Shortly thereafter, Microsoft released Bing Chat, a chatbot based on ChatGPT 4 that is integrated into the Bing search engine. By combining the power of ChatGPT 4 with real-time access to the internet, Bing Chat can provide some of the most impressive artificial intelligence results available today, and Microsoft indicates that it will keep your interactions with the technology private.

ChatGPT Gone Wrong

An egregious example of the limitations of ChatGPT is the story of New York City lawyer Steven A. Schwartz, who was sanctioned after it was discovered that several cases cited in a recent filing did not exist. His explanation for the fictional citations was that he had used ChatGPT to “supplement the legal research.” ChatGPT manufactured great-sounding case law to support his legal arguments, and Mr. Schwartz apparently neglected to verify the cases against more trusted sources. He says he asked the chatbot to confirm that the cases were real, was assured by the bot that they were, and then presented the chatbot-generated fiction as fact to the court. Mr. Schwartz later stated that the chatbot “revealed itself to be unreliable.”

Hallucinations

At the risk of understating the power of the technology, ChatGPT is effectively “autocomplete on steroids.” But instead of attempting to complete your sentences, it uses the data it has been trained on to respond to your questions or requests. These responses are often impressive and helpful.

While ChatGPT can do wonderful things, it can also lie to you with incredible confidence. When AI makes something up and presents it as fact, this is called a “hallucination,” and it is a common issue with these new chatbots. Fighting hallucinations is an area of substantial active research. Always independently verify anything that an AI tells you.


If you have not tried ChatGPT and Bing Chat, I encourage you to do so. But a cautionary note: do not use them to create pleadings without verifying facts with known reliable sources. There are free and paid options from OpenAI and Microsoft, but no version is yet hallucination-free.

Bing Chat Knows More than OpenAI’s ChatGPT

Artificial intelligence creates unpredictable results. Early results were horrific, including racist, sexist, and otherwise offensive replies.

OpenAI responded with teams of people who spent over a year reviewing ChatGPT-generated content and eliminating unacceptable results before opening access to the public. It simplified this problem by performing the base training of both 3.5 and 4 on a “frozen” data set from September 2021. This means that OpenAI’s ChatGPT is currently unaware of anything from the past two years. Bing Chat, by contrast, is connected to the internet (with software “guardrails” to avoid horrible results) and knows current events.

Written by a Real Human. No chatbot was used in the creation of this article. I wrote it all myself.
