ChatGPT is promising (or threatening) to reinvent the way people work in a multitude of industries. How will this transformation affect the legal industry, and are there risks to using it?
ChatGPT is an artificially intelligent language model designed to give human-like responses to various questions and prompts. Released by OpenAI in November 2022, the platform quickly gained immense popularity, reaching over one million users within five days of its release.
ChatGPT was trained on historic data up to 2021, limiting its knowledge base to that period. The model does not have live access to the internet, and relies on human feedback during conversations to further its learning.
Users are free to use ChatGPT to generate their own content or to converse with it on a myriad of topics. Some of the immediate uses of ChatGPT include writing posts or documentation, programming or code generation, and customer support.
However, OpenAI’s content policy does not permit users to generate hateful, harmful, sexual, or deceptive content.
Use of ChatGPT needs to comply with the following policies:
- service terms;
- sharing and publication policy;
- usage policy; and
- content policy.
OpenAI’s sharing and publication policy indicates that users are allowed to share their interactions with ChatGPT on social media. However, users must first review their content prior to posting, indicate that the content has been generated by artificial intelligence, and ensure that they do not violate the content policy.
ChatGPT is trained on historic data and relies heavily on human interaction to learn new facts. This poses a significant problem: humans may provide it with incorrect information, causing it to learn an incorrect response to a question. Additionally, ChatGPT is not able to correct its own logic and reasoning, which may lead to situations where it provides a credible-sounding but incorrect answer.
Because ChatGPT is in its early stages, the model is heavily phrase-dependent. While one phrase may prompt a response from ChatGPT, a similar phrase may result in it declining to answer due to a lack of information. Moreover, ChatGPT was trained on verbose human interactions and conversations, and has a tendency to provide lengthy responses to questions rather than concise ones.
Another issue is that ChatGPT's reasoning is not foolproof: on occasion it does respond to harmful or explicit prompts, despite attempts to block such responses. An extension of this problem is found in the attempted use of ChatGPT by hackers ranging from unsophisticated “script kiddies” to advanced cybercriminals. It has been reported that hackers have attempted to use ChatGPT to generate malicious code or emails that can be used for phishing purposes. This has resulted in ChatGPT being banned in various countries, including Russia.
With technological development fast outpacing growth in the legal industry, many are curious about ChatGPT’s implications for lawyers and the legal industry as a whole. A few immediate legal use cases include:
- legal research;
- drafting of emails and legal documents;
- contracts and document review; and
- answering legal questions or defining legal concepts.
The model on which ChatGPT is based (GPT-3) is set to be used by the DoNotPay chatbot in February 2023 to assist in defending against a speeding fine in the US. The chatbot will listen to all communications and audio within the courtroom and advise the defendant on whether to argue certain points or remain silent.
Despite ChatGPT’s possible uses in the legal field, it is not without risks or potential problems. Because ChatGPT’s training data is limited, it is currently not capable of learning new legal judgments or analysing new amendments to legislation. The amount of South African jurisprudence that ChatGPT was trained on, if any, remains unknown. Therefore, there is a risk that ChatGPT will provide outdated or inapplicable responses to legal questions, since it has only been trained on pre-2022 data.
Although there are some immediate and foreseeable legal use cases for ChatGPT, it is crucial to remember that it is not a substitute for a trained lawyer. It remains every legal practitioner’s duty to review all responses from ChatGPT and to disclose to the public that artificial intelligence was used in that specific instance.
Because ChatGPT has not been specifically designed as a legal tool and its training is not focused on legal data, there is large scope for error in its responses. This is compounded by the fact that ChatGPT is trained on historic data and relies on human feedback to learn new information. Lawyers are advised to use ChatGPT with circumspection, so as to avoid potential risks to clients and issues of the unauthorised practice of law.
From a data privacy perspective, ChatGPT interacts with many users, which creates the risk that one user shares sensitive or personal information without the respective data subject’s consent. Such personal information could then be used to further train ChatGPT. As lawyers frequently handle clients’ sensitive or personal information, they are cautioned against readily using such information or data when interacting with ChatGPT.
Lastly, ChatGPT has been trained on vast quantities of data, which may have included proprietary material. Some are concerned that ChatGPT may use such material in a response, despite not being licensed by the copyright owner.
Therefore, the main issues surrounding ChatGPT in the legal field are quality control, privacy, and the fact that it is not specifically trained in the law. Despite this, one can foresee that understanding and using ChatGPT as an aid or supplement to everyday work will become critical for all. In the legal discipline, lawyers who have taken the time to understand ChatGPT will be able to utilise it for everyday tasks, such as drafting emails or researching an answer to a legal issue.
Any business wishing to utilise ChatGPT in its operations should first conduct a full risk assessment.