Where once discussions about AI were filled only with excitement at the advances already made and the seemingly endless possibilities still to come, it is now increasingly rare for AI to be mentioned without a note of caution. Safety is the new buzzword: everyone involved with AI wants to be seen as the most safety-conscious and to come up with "world first" approaches to making AI development and use as safe as possible for all concerned. This article explores how the big tech companies and regulators are competing with one another on safety, and highlights the key developments so far that anyone using, or considering using, AI in their business should be aware of.

Introduction

Risk and safety have not typically been headline-grabbers in the tech world, but they are the focus of most recent media coverage about AI. Unusually, this is not only thanks to regulators and governing bodies, with the UK's AI Safety Summit the most high-profile event, but also to the companies actually developing the technology.

The race of the tech companies

In March this year, over 1,000 technology leaders and researchers, including Elon Musk and Steve Wozniak, warned in an open letter that AI presents "profound risks to society and humanity" and called for a pause in the development of powerful AI to give time to introduce "shared safety protocols". A couple of months later, Sam Altman, CEO of OpenAI, told the US Congress: "My worst fear is that we, the industry, cause significant harm to the world" and appealed to lawmakers to regulate AI. This was swiftly followed by a one-sentence statement signed by a group of industry leaders, including Altman, the CEOs of two of the other leading AI companies and Bill Gates: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war".

Many have questioned the motives behind some of these warnings, but it seems that the big AI companies are no longer competing only to make the best products and greatest advances; they are also in a newer competition to be regarded as the AI provider which cares the most about keeping its customers, and indeed society as a whole, safe from the potential harm their creations may unleash. One example, as we set out here, is the recent introduction by Google, Microsoft and OpenAI of indemnities into their standard terms, something which was in each case given greater publicity than you might usually expect from an update to T&Cs. Another is that in the last few months, fifteen of the leading AI companies, including Amazon, Meta and IBM, have made voluntary commitments to the US government to help advance the development of safe, secure, and trustworthy AI. More recently, six of them published safety policies in response to a request from the UK government.

The race of the regulators and governing bodies

This leads us nicely to the parallel race for AI safety being run by the more likely competitors: regulators and governing bodies. Here, terms like "world's first" and "landmark" seem to pepper every announcement. Most recently, we have had a number of "firsts" publicised by the UK government, including the AI Safety Summit ("the first global AI Safety Summit"), the Bletchley Declaration on AI safety ("a world-first agreement at Bletchley Park") and the AI Safety Institute ("world's first"). Not to be outdone, days before the UK's AI Safety Summit, the US issued a "landmark Executive Order" on safe, secure, and trustworthy AI "to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence". In the meantime, the EU stands alone among the three in racing to publish the "first regulation on artificial intelligence"; the UK and US have, at least for now, conceded that they need to do more to understand the risks before they can produce any meaningful legislation of their own. The other participant in this regulatory race is China, whose authorities have been considering AI regulation since at least 2017, when they launched their New Generation Artificial Intelligence Development Plan. China has already published specific regulations on algorithms and deep synthesis and, in August, brought into effect provisional rules for the management of AI services.

Key developments which have come out of the race so far

So, where has all this competition got us? Various declarations and commitments have been made and various institutes and taskforces have either been set up or are on their way to being set up. However, if you are using or considering the integration of AI into your business operations, of more practical note are the guidance, principles, and draft legislation which have been published. We would draw your attention to the following in particular:

United Kingdom

Frontier AI: Capabilities and Risks - Discussion Paper

  • prepared as a discussion paper for the UK AI Safety Summit

ICO Guidance on AI and Data Protection 

  • provides advice on how to interpret relevant data protection law as it applies to AI, and recommendations on good practice

European Union

Draft EU AI Act 

  • as set out above, intended to be the first piece of AI legislation when it is published either later this year or, more likely, next year

United States

Blueprint for AI Bill of Rights

  • a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public

Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence

  • orders various actions, including the establishment of new standards for AI safety and security

China

Interim Measures for the Administration of Generative Artificial Intelligence Services

  • apply to the provision of generative AI services to the public in China

United Nations

Principles for the Ethical use of Artificial Intelligence in the United Nations System

  • a set of ten principles, grounded in ethics and human rights, which aim to guide the use of AI across all stages of an AI system's lifecycle

G7

International Guiding Principles for Organisations Developing Advanced AI Systems and International Code of Conduct for Organisations Developing Advanced AI Systems

  • intended to complement the EU AI Act

While each of these is either in draft form or not legally binding, they provide useful insight into the risks being focused on and are therefore a good indicator of what you should be considering, and the types of measures you should be starting to put in place, if you are using, or considering using, AI in your business.

It is also important not to forget existing, non-AI-specific legislation which is nonetheless relevant to the use of AI, such as the UK GDPR, the Copyright, Designs and Patents Act 1988 and the Equality Act 2010.

Next Steps

Although there are not yet any AI-specific rules with legal force, a clear message has emerged: safety first. As such, if you are using or considering using AI in your business, you need to approach it with this message in mind. It would also be worth starting to consider whether your supply chain is making any use of AI and, if so, whether your suppliers are also putting safety at the forefront.