As we delve into the era of artificial intelligence (AI), it is clear that this technology is transforming traditional work methods and boosting productivity across sectors. One form of AI gaining significant traction is Generative AI. But what does it mean for employer/employee relations and the rules that govern them? Generative AI refers to a type of AI that can create content from scratch. By leveraging various algorithms (for the technically minded, these include generative adversarial networks (GANs), transformers and auto-encoders), these systems can generate text, images, music or even videos similar to those created by humans.
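For the technically curious, the short sketch below illustrates what "generating text" can look like in practice. It uses the open-source Hugging Face transformers library and a small, publicly available pre-trained model purely as an assumed, generic example – it does not reflect how any particular commercial tool (including FleetAI) is built.

```python
# A minimal, generic sketch of transformer-based text generation using the
# open-source Hugging Face "transformers" library (an assumption for
# illustration only - not the workings of FleetAI or any specific product).
from transformers import pipeline

# Load a small, publicly available pre-trained language model.
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a prompt, much as a workplace tool might draft text.
prompt = "Generative AI can support employees by"
output = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(output[0]["generated_text"])
```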

What does AI have to say about AI in employment?

Dentons has created a client-secure model of generative AI, named FleetAI, which has, with our instructions, guidance and review, written the following article on the use of generative AI in the workplace. So what did it have to say…

The benefits

The application of Generative AI in the workplace presents an array of benefits – let's delve into some key ones:

  • Automated content generation: Imagine a world where routine reports are automatically generated with minimal human intervention or where customer queries are dealt with instantly by chatbots capable of understanding and responding like human agents – this is exactly what Generative AI brings to the table.
  • Enhancing creativity: Contrary to fears about machines replacing humans at work – especially when it comes to creative tasks – Generative AI actually acts as an assistant rather than a replacement. It aids brainstorming sessions by providing original ideas which can be further developed by humans. For example, DeepArt and Artbreeder use generative models that allow users to create unique pieces of art or design elements from simple inputs. While AI artwork grabs attention, the day-to-day uses of AI in problem-solving and innovation will no doubt lead to more significant changes in how we work.
  • Efficiency and productivity increase: Generative AI also plays a pivotal role in automating mundane tasks, thus freeing up workers' time so they can focus on more strategic operations. In many cases, AI is significantly quicker and more accurate at these tasks than humans.

Some challenges

As we embrace the advantages offered by generative AI systems within our workplaces, it is important that we also consider potential challenges ahead. One major concern surrounding any type of AI system revolves around data privacy, due to its heavy reliance on data inputs for training purposes. Organisations must ensure their use complies with local and international data privacy regulations such as the GDPR. This is likely to be a significant compliance project for many organisations, and one that goes well beyond the scope of this article.

Disclosure of Generative AI usage

An interesting and pertinent question that arises in the context of Generative AI in the workplace is whether or not users need to disclose to their employers or recruiters that they have used it for certain tasks. The answer largely depends on the nature of the work, company policies and ethical considerations. If a task requires human creativity or expertise, one might argue it would be misleading not to disclose the use of an AI tool. Similarly, if an employer explicitly asks for human-generated work or restricts the use of specific tools, failing to disclose AI assistance may be considered dishonesty and therefore potential misconduct. Recruitment and promotion processes spring to mind in particular, and we may find employers stipulating that all or part of a process is to be AI-free.

However, if Generative AI is merely used as a tool – much like other software and resources used in everyday tasks (e.g. Google, spellcheck, a calculator or open-book materials) – disclosure may not be necessary unless expressly requested by employers. It could be seen as another resource employees use to improve productivity and efficiency.

Transparency remains essential when dealing with AI at work. Using these powerful tools responsibly can lead to significant advancements while ensuring fairness and trust within teams. While there is no definitive answer yet due to this being a relatively new area with evolving norms and regulations, it is important that users familiarise themselves with company policies regarding technology use, as well as industry-specific guidelines where available, when deciding about disclosing Generative AI usage.

Regulating Generative AI usage

Regulating Generative AI presents a complex challenge for governments worldwide. Due to its novelty and rapid technological advancements, many jurisdictions have not yet implemented specific laws regulating the use of Generative AI. However, some regions have begun to take steps towards this. For instance, the European Union has proposed the Artificial Intelligence Act, which aims to create a legal framework for "high-risk" AI systems and could potentially capture certain uses of Generative AI. The proposed Act is in the final phase of the legislative process and is expected to come into force in the coming months. It is also worth noting that various international organisations and industry-specific bodies are working towards establishing ethical guidelines around AI use, which cover generative models too. Despite these initial moves towards regulation and standardisation, there is still much work to be done. The fast-paced development of AI technology often outstrips regulatory frameworks, causing a lag between innovation and legislation – making it an ongoing challenge for governments globally.

Regulating the use of Generative AI in the workplace is a task that falls to individual employers and requires a multi-faceted approach. Here are some ways they can manage and check the use of such technology:

  • Establish clear policies: Employers should develop clear policies on the use of AI tools, including when their usage is appropriate, any necessary disclosures and consequences for misuse. These policies should be communicated effectively to all employees.
  • Training programmes: Employers can provide training programmes to educate employees about responsible AI use. This could include information on data privacy, intellectual property rights, ethical considerations and potential risks associated with Generative AI.
  • Auditing systems: Some sophisticated Generative AI systems have auditing capabilities built into them which allow users or administrators to review the actions taken by the system. Employers can leverage these features to monitor how these tools are being used within their organisation.
  • Verification methods: In certain cases where it is critical that human work is not replaced by AI (e.g. creative writing), employers could implement verification methods such as asking for drafts or process work as evidence of human involvement.

However, it is important to note that detecting the use of Generative AI may not always be feasible, due to its advanced capabilities in mimicking human-like outputs – hence having comprehensive guidelines around its acceptable use becomes crucial to maintaining accountability and integrity in workplace processes. And, of course, just as when employers grew more comfortable with home working, there will be an element of trust required between organisations and their staff.

Despite potential challenges ahead, it is clear that Generative AI holds enormous promise for enhancing our workplaces – from increased efficiency to greater creativity. As we continue to explore and adopt these advanced tools, one thing remains certain: the future of work will undoubtedly be shaped by this fascinating technology.

How did FleetAI write this?

We uploaded a range of relevant documents to inform the AI system about more recent developments in this area, as we are conscious that, as a tool, its world knowledge is time-limited. Our FleetAI system is proprietary, which means we can upload this information securely. We also explained which points we thought were important to cover and the purpose of the article. We then asked FleetAI to get to work and, when various drafts were provided, we went through a process of requesting changes until we reached a point with which we were happy.
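As a purely hypothetical illustration of this kind of workflow (FleetAI itself is proprietary and its internals are not described here), the sketch below shows how reference material and drafting instructions might be supplied to a general-purpose language model API, and how a draft could then be iterated on. The OpenAI Python client, the model name and the file name are assumptions for the example only.

```python
# Hypothetical sketch of a "supply documents, instruct, then iterate" drafting
# workflow using the public OpenAI Python client. Illustrative only - this is
# not how FleetAI is implemented; the model and file names are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Recent background material the model would not otherwise know about.
with open("recent_ai_developments.txt") as f:  # hypothetical file
    background = f.read()

messages = [
    {"role": "system",
     "content": "You are drafting a client article on generative AI in the workplace."},
    {"role": "user",
     "content": f"Background documents:\n{background}\n\n"
                "Please draft an article covering benefits, challenges, "
                "disclosure and regulation."},
]

draft = client.chat.completions.create(model="gpt-4", messages=messages)
print(draft.choices[0].message.content)

# Iteration: feed the draft back with requested changes until it is acceptable.
messages.append({"role": "assistant", "content": draft.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Shorten the regulation section and add practical examples for employers."})
revised = client.chat.completions.create(model="gpt-4", messages=messages)
print(revised.choices[0].message.content)
```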

And, when we asked FleetAI exactly what it did to write the above, it said:

"As an AI developed by OpenAI, I've been trained on a diverse range of internet text. However, I don't know specifics about which documents were in my training set or have access to personal data unless it has been shared with me in the course of our conversation. I generate responses based on patterns and information in the data I was trained on. For this task, I used my training to generate a blog post about generative AI in the workplace based on commonly known facts and ideas about this technology."