This article is an extract from In-Depth: Artificial Intelligence Law - Edition 1.


I Responsible AI and technology governance

HASTE

Artificial intelligence (AI) is a wide and varied category of technology that is famously difficult to define, and for which definitions therefore differ. 'Responsible AI' (RAI) is a subset of AI that meets certain process or technical criteria, whether articulated in (mostly forthcoming) regulations, official guidance, industry guidelines or organisational policies.

For the purposes of this chapter, AI means the specific algorithmic model (technology) as well as the related data and human systems that enable it to be commercialised and deployed. For this reason, it can be useful to think about the responsible use of trustworthy technology and to consider all governance principles as they relate to both the human being and the machine.

On the whole, RAI is assumed at a minimum to accord with relevant legal requirements and, especially while the regulatory landscape continues to evolve, to meet criteria above and beyond mere lawfulness. Indeed, the pace at which AI is evolving means that regulations will always lag behind it or be so general as to leave considerable room for interpretation and 'private' rules.

Filling this gap are more than 200 sets of principles to guide the responsible development, use and governance of AI. These sets of principles (and corresponding toolkits, guidebooks and road maps) have been articulated by industry (e.g., Microsoft, Google, Business Roundtable), civil society (e.g., World Economic Forum, The Data & Trust Alliance), government bodies (in, for example, the United States, the European Union, the United Kingdom and China) and global entities (e.g., the Organisation for Economic Co-operation and Development (OECD) and the United Nations Educational, Scientific and Cultural Organization (UNESCO)).

Though these sets of principles differ2 in how they are articulated and the order in which they prioritise attributes of responsible technology, they share the following core tenets, which can easily be remembered as HASTE:

  1. Human-centred – refers to AI that accords with essential human rights and needs, including autonomy. In some jurisdictions (notably the European Union), human-centredness is a developed and enforceable legal concept, whereas in others (notably the United States), beyond limited constitutional and legal rights, it is more of a prudential concept. In either case, RAI assumes that an appropriate level of attention and accommodation is given to protecting the rights of human beings affected by the technology. Related concepts include fairness and privacy.
  2. Accountable – refers to AI for which individuals or teams are expressly responsible for its impact. Who is accountable will differ by organisation, many of which are still sorting out how to assign overall AI responsibilities. Accountability is critical in the legal context, to fulfil due process rights on the front end and to ameliorate any harms caused on the back end. Related concepts include transparency and explainability.
  3. Safe and secure – refers to AI that does not harm the user or permit cyber or physical security breaches or data leakage. Related concepts include robustness, resilience, accuracy and quality.
  4. Transparent and explainable – refers to AI whose general or specific workings can be understood, and are knowable and disclosable to human developers and users. Some AI operates in a black box, producing outputs that cannot be traced back to the model's inputs or logic. Transparency can relate to the technology itself or to the system or human process into which the technology is put. Explainability is the ability to communicate effectively about the workings of an AI model to human beings. Related concepts include disclosure, audits, assessments and interpretability.
  5. Ethical and fair – refers to AI that meets prevailing ethical and related standards. Related concepts include privacy and safety.

Although these terms refer to human values and principles (human rights, non-discrimination, etc.), they do not indicate that those values have been engineered into a model or system. Without additional context, policies and process, RAI principles are aspirational. In other words, these principles require a mixture of human processes and technical efforts to select, design and monitor AI in a manner consistent with human, social, political, cultural and legal values. How these principles are ordered and articulated will therefore vary across jurisdictions and organisations and, by extension, use cases; not only will the context of the application determine what is meant by the principles, but it may also require that certain principles be weighted more heavily than others.

II Best practice for RAI governance

i Governing across the life cycle

The AI life cycle consists of four primary stages:

  1. Use case selection – identifying with particularity the applied purpose (or purposes) or goal (or goals), articulating the AI tools proposed for use and why they are preferable to a non-technology solution, and defining the context for the technology's application. AI-powered tools are often favoured for their ability to increase accuracy, efficiency and cost savings, create new work opportunities, or automate repetitive, tedious tasks. At this stage, identifying use cases carefully will also inform the sorts of resources, governance and skills that are required. Likewise, those acquiring AI tools should consult procurement standards.
  2. Design and selection – designing or procuring an AI model that is fit for purpose and appropriate for the use case context in which the system will be used. Governance mechanisms will depend on the AI technique or model selected, such as natural language processing, sentiment analytics, image recognition, predictive analytics, chatbots, augmented reality, robotic process automation or generative AI.3 Critically, these decisions will be informed by the quality and extent of the data at hand, as well as the extent and reliability of the consents obtained for using that data.
  3. Development – engineering the AI model, selecting and preparing appropriate data sets, and training and testing the model. High-quality data is data that was collected with the proper consents or approvals, is representative and clean, and preserves privacy. AI models require three distinct data sets at three distinct stages (illustrated in the first sketch following this list): (1) training data, or historical data used to teach an AI model logic and pattern recognition; (2) test data, against which one can evaluate how well a trained model performs once built; and (3) production data, ingested by the model once deployed for operational or commercial use.
  4. Deployment, monitoring and decommissioning – running the AI system in its real-world applied setting, training users and field engineers, and assessing (and, as needed, adjusting) how the AI system performs relative to its defined goals and whether it has exhausted its utility or purpose. Once deployed, AI models are susceptible to drift, or divergence from their set instructions or intended domains and purposes of use. This can occur for several reasons, including because the model was exposed to new data and learned a new pattern, or because it received feedback or reinforcement from human beings in its deployment environment, potentially causing it to reweight or reprioritise information in a manner that alters the outputs it produces. It is critical, therefore, to monitor the system post-deployment (a drift-monitoring sketch follows this list). Likewise, it is critical to assess the shelf-life of any particular model and define the necessary decommissioning schedule and processes. Such monitoring and assessment can be important to correct errors, mitigate risk or improve operation.
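
To make the three data sets described in stage 3 concrete, the following is a minimal sketch assuming a scikit-learn-style tabular workflow; the data, model choice and all names are illustrative assumptions, not drawn from the text above.

```python
# Illustrative sketch of training, test and production data (hypothetical example).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Historical data stands in for data collected with proper consents or approvals.
X_historical = rng.normal(size=(1000, 4))
y_historical = (X_historical[:, 0] + X_historical[:, 1] > 0).astype(int)

# (1) Training data and (2) test data are carved out of the historical set.
X_train, X_test, y_train, y_test = train_test_split(
    X_historical, y_historical, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# (3) Production data is seen only once the model is deployed.
X_production = rng.normal(size=(100, 4))
predictions = model.predict(X_production)
```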
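
Post-deployment drift monitoring, as described in stage 4, can be as simple as statistically comparing production inputs against the training distribution. The sketch below uses a two-sample Kolmogorov–Smirnov test; the choice of test and the significance threshold are assumptions for illustration, not a prescribed standard.

```python
# Illustrative sketch of drift detection on production data (hypothetical example).
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(X_train, X_production, alpha=0.01):
    """Return indices of features whose distribution appears to have drifted."""
    drifted = []
    for feature in range(X_train.shape[1]):
        statistic, p_value = ks_2samp(X_train[:, feature], X_production[:, feature])
        if p_value < alpha:  # reject 'same distribution' at significance level alpha
            drifted.append(feature)
    return drifted

rng = np.random.default_rng(seed=1)
X_train = rng.normal(size=(1000, 4))
X_production = X_train.copy()
X_production[:, 2] += 1.5  # simulate drift in one feature after deployment

print("Drifted features:", detect_drift(X_train, X_production))  # e.g. [2]
```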

The human and social systems surrounding the AI models are critical to their performance in applied settings. For this reason, it is helpful to distinguish between AI models (composed of algorithms and data) and AI systems (which include the models as well as the human beings, organisations, institutions and other technologies involved in the AI life cycle). Currently, most AI augments, rather than fully replaces, human functions, highlighting the importance of the human–technology interaction, and how well an AI model was designed, taught, reinforced, monitored, etc.

ii By-design mindset

During the past few years, governance (on a variety of fronts, including privacy, ethics and security) has emerged as a strategic function, rather than something bolted on at the end to satisfy compliance obligations. By building policies, practices and controls alongside the AI model itself, teams can increase the likelihood that their model will operate and perform reliably and as intended in its applied setting, derive accurate insights and integrate well with other tools. Governance-by-design therefore presents a strategic opportunity for organisations.

iii Journey-not-a-destination approach

The technology is evolving quickly, as are the human processes and governance workflows that surround it. As a result, RAI is never 'done' but, rather, an ongoing practice.

iv Continuous training, education, communications

AI is already augmenting how human beings work, interact, navigate, create, and receive and process information. Education and adaptation are therefore ongoing requirements. Continuously building capabilities, new skills and resilience in people should become the norm, including capabilities relating to oversight, accountability and appropriate responsibilities (and professional duties of care).

v Multi-stakeholder participation and review boards

It is widely understood that RAI is a team sport, requiring meaningful input from a variety of perspectives, domains of expertise and lived experiences. One organisational tool (in business and government) for bringing together these perspectives (beyond focus groups at the design stage) is some form of a review board. Review boards vary in their composition, focus or mandate, membership skill set and seniority, and authorities. Some boards are empowered to advance priority AI topics and issues, and perhaps to approve or prohibit use cases across the entire enterprise, and conduct investigations into third-party technology vendors. Others are more narrowly tasked with benchmarking industry best practices, hosting training sessions and articulating internal AI use policies.

III Intersections with data and cyber

AI is already integrated across the economy and a wide range of human activity, and increasingly does not exist in isolation from other issues and risk areas. RAI takes account of this, especially in the context of cybersecurity, data protection and privacy. Often, these principles and values are presented either in opposition to one another (e.g., transparency versus privacy; broad access versus security) or, at best, in discrete silos. In reality, however, they are intricately related and need to be developed and advanced together. Because most organisations operationalise these topics, and train on them, in silos, overt steps are required to integrate and bring coherence to the suite of digital issues. As noted above, review boards are one such device, but improving information flow at all levels is also necessary.

IV International efforts and standards

Despite all the discussion and publishing activity, there remain only very high-level international agreements on RAI and no agreed legal standard for what it means in practice. As this and the following chapters will discuss, existing and emerging laws and regulations will be applied to AI and activities powered by AI, and likewise, community standards are also beginning to evolve.

On 26 November 2023, the UK National Cyber Security Centre and the US Department of Homeland Security's Cybersecurity and Infrastructure Security Agency jointly released 'Guidelines for secure AI system development'.4 At the time of writing, agencies in 18 countries5 have agreed to these recommendations for companies to develop or deploy (or both) AI systems that are 'secure by design', which a senior US official described as 'the first detailed international agreement on how to keep artificial intelligence safe from rogue actors'.6

The guidelines cover four key areas: secure design; secure development; secure deployment; and secure operation and maintenance. Among other things, they outline considerations for monitoring abuse, protecting data from tampering and vetting software suppliers. Critically, the guidelines do not address appropriate uses of AI or the collection of model training data.

i G7 Hiroshima process

In September 2023, the OECD released a report7 presenting the results of a questionnaire put to G7 members,8 including their self-reported policy instruments. Responses and trends are indicative of the first half of 2023. On 11 October 2023, a set of 11 principles agreed by G7 ministers was released for stakeholder consultation.9

On 30 October 2023, the European Commission published two nearly identical artefacts from the process: the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems10 and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.11 Both sets of principles apply across all stages of the AI life cycle and will be updated and reviewed as necessary. Unlike the International Code of Conduct, which is intended for relevant organisations (e.g., academia, civil society, the private sector or the public sector), the Guiding Principles apply to 'all AI actors' and will be developed further, with input from countries and academic, business and civil society stakeholders, as part of a comprehensive policy framework.

ii Organisation for Economic Co-operation and Development

The OECD Policy Observatory has identified more than 700 global AI policy initiatives from 60 countries, territories and the European Union. In May 2019, the OECD adopted its Principles on Artificial Intelligence, which promote AI that is trustworthy and respectful of human rights and democratic values. Though voluntary, like many of the other sets of global principles, the OECD's principles have since been readily adopted by OECD Member States and others, and this work has formed the basis of other initiatives (including that of the G7).

iii UNESCO and GPAI

In November 2021, at the UNESCO General Conference, 193 Member States adopted the Recommendation on the Ethics of Artificial Intelligence,12 the first global standard-setting instrument on the subject.13 The Recommendation put forth a series of values and principles aligned with the UN Sustainable Development Goals:14

  1. Values:
    • respect, protection and promotion of human rights and fundamental freedoms and human dignity;
    • environment and ecosystem flourishing;
    • ensuring diversity and inclusiveness; and
    • living in peaceful, just and interconnected societies.
  2. Principles:
    • proportionality and 'do no harm';
    • safety and security;
    • fairness and non-discrimination;
    • sustainability;
    • right to privacy and data protection;
    • human oversight and determination;
    • transparency and explainability;
    • responsibility and accountability;
    • awareness and literacy; and
    • multi-stakeholder and adaptive governance and collaboration.

In November 2022, the Global Partnership on Artificial Intelligence (GPAI) released its second 'Responsible AI Working Group Report'.15 The Working Group comprises 64 experts from 25 countries who 'contribute to the responsible development, use and governance of human-centred AI systems, in congruence with the UN Sustainable Development Goals'.16 In 2022, the Working Group pursued several projects on AI and climate, social media governance, public-domain drug discovery and pandemic resilience. In July 2023, GPAI, UNESCO, VDE17 and AI Commons18 put out a call for partners to help build a 'Global Challenge to Build Trust in the Age of Generative AI'.19 This global competition aims to 'promote trust by equipping governments, organisations, and individuals to be resilient in the era of scalable synthetic content'.20 To date, no further details about the challenge have been shared.

iv Declaration on use of AI in military applications

As at 13 November 2023, 46 states have endorsed the implementation of the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, announced on 9 November 2023.21 The Declaration highlights 10 measures that endorsing states believe should be implemented in the development, deployment and use of military AI capabilities, including autonomous functions and systems. To further the Declaration's objectives, endorsing states publicly commit to the Declaration and to (1) sharing details about how they are implementing these measures, (2) engaging in discussions about the responsible and lawful development, deployment and use of military AI capabilities, and about where these measures may require refinement or supplementation, and (3) engaging with the international community to promote these measures and related and supporting efforts.

v Business Roundtable

Business Roundtable is a consortium of US-based chief executive officers from more than 200 public companies representing all sectors of the US economy. In early 2022, it released a road map outlining a set of 10 core principles for responsible AI, and policy recommendations for the US administration to consider when regulating AI. These documents serve as a general framework.22

vi Standards-setting bodies

The International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE) are among several bodies working towards the development of internationally harmonised standards for AI. The voluntary standards they produce are industry- and sector-agnostic and broadly applicable to any organisation, regardless of size, type, resources or function. This flexibility allows both public and private organisations to tailor the standards to their specific organisational and technical needs, purposes and obligations. Specifically, ISO's 42000 series of AI management (versus technical) standards is likely to be adopted by a variety of organisations as a starting point. Likewise, IEEE's 7000 series will also influence AI management and quality. Both series are steps forward but remain high level; more work needs to be done to understand how they will apply in practice.

V Corporate governance implications

The functions and purposes of corporate boards include the duties of care and of remaining informed, and responsibilities for strategic planning and risk management. Without question, the effects and implementation of AI will influence these functions and purposes. Boards and directors therefore need to consider how they are being resourced on these issues, the questions they need to be prepared to ask and deliberate on, their composition, and their process. On the latter point, it is worth assessing how board agendas can incorporate strategic questions and learning sessions on emerging technologies.

VI Key technology-related trends in AI (2024)

If 2023 was any indication, it will be impossible to fully appreciate in advance what the trends will be. That being said, reviewing the landscape in 2023, it is reasonable to assume that the following will emerge as important issues.

  1. Generative AI: without question, generative AI was the key development in 2023. It has put into the hands of every person with a phone the ability to use sophisticated AI tools for their individual purposes. The technology community acknowledges that the available tools are rudimentary relative to where they are going and how they will evolve over time. Still, they are powerful and starting to reshape workflows, schools and how we think about human creativity and intellectual property. We are at the beginning of this transformation in practice and in law, so it is important to watch this space.
  2. Data control mechanisms and privacy-enhancing technologies: the range, power and utility of technologies (and techniques) designed to help users collaborate in a trusted way across otherwise restricted or sensitive data sets will increase. These technologies include, but are not limited to, synthetic data, homomorphic encryption, data trusts, federated learning and clean rooms (a federated learning sketch follows this list). More generally, the commercial and compliance needs to protect certain sensitive information will drive the development of this class of tools and techniques.
  3. Metaverse connected devices (extended reality (XR), virtual reality (VR) and augmented reality (AR)), the internet of things and smart cities: arguably, this set of technologies had its hype moment in 2021–2022 and has since been steadily (and more quietly) improving in power and utility. These will re-emerge, especially in the areas of AR tools, integrated devices, and high-performance computing and network management.
  4. Convergence of AI with blockchain, high-performance computing, robotics, CRISPR,23 materials science, etc.: the integration of generative AI and classical AI will accelerate developments in other advanced technology areas and start to create 'AI-first' experiences.
  5. Identity and authentication: the ability to authenticate people and content will become essential for smoothly functioning democracies and economies. Convincing deepfakes and security challenges will strain systems that rely solely on human beings' assessments of authenticity. New tools are in development on this front, including, but not limited to, watermarking and other data provenance trackers, but these are neither fully mature nor complete solutions.
  6. National security and cybersecurity: concerns about AI-fuelled threats to nation states and critical infrastructure will continue, as will the development of country-level or bloc-level regulations and commercial restrictions.
  7. Sustainability: the energy demands of certain technologies challenge (and compete with) certain sustainability goals. At the same time, some other technologies drive efficiencies, for instance in how to run and protect energy grids.
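
To illustrate one of the privacy-enhancing techniques named in item 2 above, the following is a minimal sketch of federated learning, in which parties share only model weights, never raw data. The linear model, update rule and all names are illustrative assumptions, not a definitive implementation.

```python
# Illustrative sketch of federated averaging (hypothetical example).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few steps of least-squares gradient descent on one party's private data."""
    w = weights.copy()
    for _ in range(epochs):
        gradient = X.T @ (X @ w - y) / len(y)
        w -= lr * gradient
    return w

rng = np.random.default_rng(seed=0)
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(3):  # three parties, each holding data it cannot share
    X = rng.normal(size=(200, 2))
    parties.append((X, X @ true_w + rng.normal(scale=0.1, size=200)))

global_w = np.zeros(2)
for _ in range(20):  # federated averaging rounds
    local_weights = [local_update(global_w, X, y) for X, y in parties]
    global_w = np.mean(local_weights, axis=0)  # server averages weights only

print("Recovered weights:", global_w)  # approaches [2.0, -1.0]
```

Each party's raw data never leaves its custody; only the averaged parameters circulate, which is what makes the technique attractive for collaboration across restricted or sensitive data sets.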