Since the launch of ChatGPT in November 2022, Generative Artificial Intelligence (“AI”) has taken the world by storm, attracting over 100 million users for both personal and professional use in less than one year. Given the growing popularity of ChatGPT and similar AI tools, it is likely that they will soon infiltrate your workplaces, if they haven’t already. Although AI holds the potential to enhance and streamline the way work is done across industries, there are also numerous pitfalls that employers need to be aware of and should address head-on. Employers in nearly all industries thus should consider implementing AI policies that explain whether (or to what extent) they will allow employees to use AI at work, and what parameters will apply to internal or external tools to ensure ethical and practical use by their employees.
What is Generative AI?
AI platforms like ChatGPT work by collecting data from the internet and using language models to create new content – including audio, code, images, text, simulations and videos – based on a user’s prompt. These models consume massive data sets of text, information and images from the internet and other sources, which are used to train them to gradually “learn” and “understand” the relationships between words or data. Given its capacity to produce human-like content at the press of a button, AI has garnered both positive and negative attention.
Amazing Potential, But Caution Warranted
Without question, AI has the potential to revolutionize the workplace by enhancing employee capabilities, streamlining processes, creating efficiencies and promoting innovation across various industries. From automating content creation to generating code and assisting with customer interactions, the applications appear to be endless. The technology can save time and increase productivity, freeing up human resources to focus on more strategic, creative, complex and analytical tasks. Yet although AI tools hold immense potential, concerns exist regarding data privacy and security, implicit bias, intellectual property ownership and accountability, among other areas – particularly because these AI tools are relatively new and potentially unpredictable.
For example, if employees enter proprietary or confidential employer information into AI tools, the information could inadvertently be shared, potentially losing its legal protection. Another AI-related concern is that many models learn from the data they are trained on, which can inadvertently perpetuate biases in that data and lead to discriminatory outputs.

Yet another challenge is determining accountability and responsibility for errors or unintended outcomes in content produced by AI systems. When AI generates content, it can be difficult to determine who is responsible for the final output. Further, although AI can produce impressive results, it is not immune to errors or inconsistencies. It can be prone to “hallucinations” and to providing outdated answers; establishing accountability is therefore crucial, especially in cases where generated content leads to negative outcomes or mistakes. For example, in June 2023, a New York lawyer garnered national attention when he admitted to submitting a brief, prepared using AI, that contained citations to nonexistent cases. The AI tool even confirmed for the lawyer – incorrectly – that the cases were real when he asked. The lawyer was sanctioned, and some courts now require lawyers to disclose the use of AI in filings. Employers need mechanisms to review and ensure the accuracy of generated content, particularly if that content will be customer- or end user-facing or mission-critical.

Finally, when an AI model produces a creative piece, code or an innovative idea, the question of ownership and intellectual property rights becomes complex.
Implementing Effective Generative AI Policies
To mitigate these concerns, employers should establish protocols that govern employee use of AI in the workplace. The extent of acceptable AI use will depend on the industry and employer preferences, and some employers may choose to prohibit the use of AI tools altogether. For employers who wish to allow the use of such tools, key considerations include:
- Protecting Confidential Information: Prohibit employees from entering confidential employer information into AI tools to ensure that proprietary or confidential information remains protected and does not find its way into generated content.
- Identifying Potential Biases: Actively work to identify and address any bias that might arise in AI-generated content, and select diverse, representative datasets to minimize bias – especially with respect to legally protected characteristics such as gender, race, religion, age and disability.
- Ethics Training: Provide employees with informative training about AI, its capabilities, acceptable uses and ethical considerations when working with AI tools.
- Attribution and Accountability: Clearly define ownership of content generated by AI, and set expectations for crediting both AI and human contributors.
- Quality Control: Establish guidelines for reviewing and approving AI-generated content, ensuring that it meets your company’s quality standards.
- Adaptability: As technology evolves, policies should be adaptable and flexible, incorporating new insights and practices as they emerge.
AI offers an exciting frontier for workplace innovation, but its integration requires thoughtful consideration and well-crafted policies. Rather than viewing AI as a replacement for human employees, employers should focus on how it can complement and enhance human skills. Policies should encourage collaboration between AI and employees to achieve the best outcomes. By addressing issues of confidentiality, bias, ownership and collaboration, employers can harness the power of AI while ensuring ethical and responsible use. As the landscape of technology continues to evolve, establishing and updating these policies will be key to navigating this new era of work.
Finally, it is important for employers to remember that the EEOC has issued guidance on the use of AI in hiring and employment decisions (as we have previously discussed here and here); that guidance should be incorporated into any AI use policies and followed in the appropriate contexts.