Introduction

Trust is at the core of most, if not all, successful ventures, initiatives and relationships. This human concept extends to the digital realm, and the advancement of artificial intelligence ("AI") solutions into an ever-increasing number of everyday tasks brings the issue of trusted AI models sharply into focus.

Seeking to ensure that generative AI is utilised in a safe and responsible manner where trust is sustained, the Infocomm Media Development Authority ("IMDA") and the AI Verify Foundation announced, on 31 October 2023, the first-of-its-kind Generative AI Evaluation Sandbox ("Sandbox"). The Sandbox brings together key global players to build capabilities in the testing and evaluation of generative AI, and forms part of efforts towards a common, standardised approach to assessing generative AI. The Sandbox utilises a new draft Evaluation Catalogue that sets out common baseline methods and recommendations for Large Language Models ("LLMs").

In this Update, we briefly explore the risks associated with generative AI models, the key aspects of the Sandbox, and how Rajah & Tann can help you successfully navigate the myriad issues relating to the adoption of AI solutions to meet your business needs.

Current Concerns Regarding LLMs

Generative AI refers to AI models that learn the underlying distribution of the data on which they are trained and then generate new content from that learned distribution. ChatGPT is an example of an extremely popular generative AI model.

Generative AI is being utilised in industries ranging from fashion to healthcare. However, our understanding of the risks of generative AI is still evolving. In the discussion paper "Generative AI: Implications for Trust and Governance" ("Discussion Paper"), IMDA detailed the key risks and harms of LLMs and proposed ideas for senior leaders in government and businesses on building an ecosystem for the trusted and responsible adoption of generative AI.

Key risks posed by generative AI highlighted in the Discussion Paper include:

  • mistakes and "hallucinations";
  • privacy and confidentiality;
  • scaling disinformation, toxicity and cyber-threats;
  • copyright issues;
  • ethical issues such as embedded biases which echo in downstream applications; and
  • values, alignment and the difficulty of good instructions.

To address some of these risks, and to work towards a common, standardised approach for assessing generative AI that supports its safe and trustworthy adoption, IMDA is inviting industry partners to collaboratively build evaluation tools and capabilities in the Sandbox.

Key Aspects of the Sandbox

The Sandbox will utilise a new Evaluation Catalogue ("Catalogue") as a shared resource that sets out common baseline methods and recommendations for LLMs. The AI Verify Foundation welcomes initial comments and feedback on the draft Catalogue at [email protected]. To learn more about the AI Verify Foundation, please read our Legal Update regarding its launch here.

Key model developers (including Google and Microsoft), application developers with concrete use cases (including DataRobot and OCBC) and third-party testers (including Deloitte and EY) have joined the Sandbox. The full list of participants is available here. The Sandbox participants will assist in creating a more robust testing environment.

Key aspects of the Sandbox are:

  • Offering a common language for the evaluation of generative AI through the Catalogue. The Catalogue compiles the existing, commonly used technical testing tools, organises these tests according to what they test for and their methods, and recommends a baseline set of evaluation tests for generative AI products, thereby providing a research-based baseline for the evaluation of generative AI.
  • Creating a body of knowledge covering how generative AI products should be tested. The Sandbox will help build evaluation capabilities beyond what currently resides with model developers. As the testing of generative AI should also include the application developers who build on top of the models, the Sandbox will involve players in the third-party testing ecosystem. This will enable model developers to understand what external testers require in responsible AI models. Where possible, each Sandbox use case should involve an upstream generative AI model developer, a downstream application deployer and a third-party tester to demonstrate how the different players in the ecosystem can collaborate. Moreover, by involving relevant regulators, such as the Singapore Personal Data Protection Commission (PDPC), the Sandbox allows for transparency regarding the needs of all parties along the supply chain.
  • Developing new benchmarks and tests. The Sandbox use cases will likely reveal gaps in the current landscape of generative AI evaluations, particularly in currently underdeveloped domain-specific areas, such as human resources or security, and in culture-specific areas. The Sandbox will develop benchmarks for evaluating model performance in specific areas that are important for particular use cases, and for countries like Singapore that have cultural and language specificities.

The AI Verify Foundation and IMDA invite interested model developers, application developers and third-party testers to participate in the Sandbox.

The Sandbox allows relevant parties to create trusted generative AI models that will have meaningful application in the Asian, as well as the global, context. This development complements Singapore's evolving regulatory approach to AI. You may read about a recent proposed regulatory development relating to data protection issues arising from the use of AI systems in our Legal Update here.

Concluding Words

The Sandbox is an important step in the journey towards building safe and trustworthy generative AI models. This exercise reflects Singapore's approach of engaging actively with AI stakeholders, and the Sandbox deepens such sharing and learning through the collaborative work on the AI body of knowledge. This approach will also allow Singapore to develop protocols and standards with the AI community and international standards bodies, a necessary precursor to ensuring that operational compliance can be achieved under any upcoming AI-related laws and regulations in Singapore.

In this regard, the multi-disciplinary approach of our Data and Digital Economy practice allows Rajah & Tann to prepare clients for the legal challenges of using generative AI models.

Apart from assisting clients in navigating the regulatory and legal frameworks relevant to the use of AI models, our lawyers also collaborate with technical specialists in our R&T Technologies and R&T Cybersecurity teams to offer greater value-added, multi-disciplinary solutions for clients.