Singapore has positioned itself as a leading jurisdiction for digital transformation and artificial intelligence (“AI”) governance through a deliberately pro-innovation regulatory model. Rather than adopting a single, binding AI statute, Singapore relies on a combination of national strategies, voluntary governance frameworks, sector-specific guidance, and practical implementation tools. This approach reflects a policy choice to encourage responsible AI adoption while preserving flexibility as technologies evolve.
For businesses and legal practitioners operating in or from Singapore, AI compliance is not a matter of satisfying a single “AI Act,” but of navigating an ecosystem of interlocking frameworks and sectoral expectations. This article outlines that ecosystem, explains how digital and AI governance operate in practice, and highlights the most relevant developments.
Regulatory Architecture and Coordination
Singapore’s digital and AI governance framework is intentionally distributed across multiple public authorities. The Ministry of Digital Development and Information (“MDDI”) sets overall strategy for digital development, public-sector digitalisation, and national AI policy. The Infocomm Media Development Authority (“IMDA”), a statutory board under MDDI, plays a central operational role in developing AI governance frameworks and technical tools.
Data protection and personal data governance fall under the purview of the Personal Data Protection Commission (“PDPC”), which administers the Personal Data Protection Act 2012 (“PDPA”) and issues guidance on AI systems that process personal data. Sector-specific regulators complement this baseline governance. In financial services, the Monetary Authority of Singapore (“MAS”) shapes expectations around AI-driven analytics and decision-making. In the legal sector, the Ministry of Law (“MinLaw”) addresses professional and ethical issues arising from generative AI.
These authorities operate in a coordinated manner, relying primarily on guidance, incentives, and technical standards rather than prescriptive legislation. This reflects Singapore’s broader regulatory philosophy of encouraging adoption through clarity, trust, and collaboration.
MDDI and National Digital & AI Strategy
MDDI plays a central role in aligning domestic digital policy with Singapore’s international engagement on AI governance. In September 2025, it launched the Singapore Digital Gateway (“SGDG”), a centralised platform consolidating more than 30 digital and AI governance resources developed through Singapore’s digital transformation journey. SGDG is designed not only for domestic use, but also to support policymakers, regulators, and multilateral organisations globally.
Currently, SGDG focuses on two foundational domains: (a) AI and (b) digital government. The AI domain includes, among other resources, Singapore’s National AI Strategy 2.0 (“NAIS 2.0”), the AI Verify Testing Framework and Toolkit (“AI Verify”), and Project Moonshot, a structured toolkit for testing and evaluating large language models against risks such as hallucinations, bias, and safety failures. It also features the AI Playbook for Small States, developed jointly with Rwanda, which translates Singapore’s AI governance experience into practical guidance for jurisdictions with limited regulatory capacity. These initiatives position Singapore as a contributor to global AI governance infrastructure, rather than a passive rule-taker.
The Digital Government domain includes Singapore’s Digital Government Blueprint, the Singpass national digital identity system, and open-source platforms such as FormSG and Isomer that support secure digital service delivery. MDDI has indicated that SGDG will be progressively expanded to cover additional areas such as cybersecurity, online safety, smart cities and the digital economy, positioning Singapore’s resources as “Tech for the Public Good” and sharing practical tools with countries at different stages of digital development.
IMDA: AI Governance Frameworks and Tools
IMDA is the primary architect of Singapore’s AI governance frameworks. Together with the PDPC, it developed the Model AI Governance Framework (“MAIG”), first issued in 2019 with a second edition released in 2020, as voluntary, cross-sector guidance for the responsible use of AI. The MAIG focuses on internal governance structures, human oversight, risk management, transparency, and stakeholder communication, and encourages organisations to adopt measures proportionate to the risks posed by their AI use cases.
IMDA has progressively extended this framework to address new categories of AI systems. In 2024, it released the Model AI Governance Framework for Generative AI, which addresses risks associated with large language models and generative systems, including hallucinations, bias, intellectual property, content provenance, cybersecurity, and systemic risk. Most recently, in January 2026, IMDA introduced the Model AI Governance Framework for Agentic AI, addressing governance challenges posed by autonomous or semi-autonomous AI agents capable of independent decision-making. These developments place Singapore among the first jurisdictions to articulate structured governance guidance for advanced AI systems.
A defining feature of IMDA’s approach is its emphasis on operationalisation and interoperability. AI Verify, originally developed by IMDA, is a testing and assurance framework that enables organisations to assess AI systems against recognised governance principles through process checks and technical tests. To reinforce global collaboration, AI Verify is overseen by the AI Verify Foundation, a not-for-profit entity and wholly owned subsidiary of IMDA that manages the framework as an open-source project and convenes a global ecosystem of contributors. This structure highlights that AI Verify is not a rigid government compliance checklist, but a collaborative assurance mechanism designed to evolve alongside international standards.
IMDA has also developed a crosswalk mapping Singapore’s AI governance frameworks to international standards such as the US National Institute of Standards and Technology (NIST) AI Risk Management Framework. These mappings are particularly relevant for multinational organisations, as they reduce compliance friction and signal that alignment with Singapore’s voluntary frameworks is likely to support broader global compliance efforts. Although formally non-binding, IMDA’s frameworks and AI Verify increasingly function as benchmarks in procurement, contracting, and regulatory discussions.
PDPC: Data Protection in AI Systems
Data protection remains the primary source of binding legal obligations for AI systems in Singapore where personal data is involved. The PDPC administers the PDPA, which applies across the AI lifecycle, from data collection and model training to deployment and monitoring.
In March 2024, the PDPC issued the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems. These guidelines clarify how PDPA obligations apply in different contexts. For development and training, they explain how organisations may rely on consent or statutory exceptions such as those for research or business improvement, subject to safeguards and accountability measures. For business-to-consumer (“B2C”) deployment, the guidelines emphasise meaningful notification and transparency, including explaining how AI-enabled features operate and affect individuals, with consent or appropriate exceptions supported by additional safeguards. For business-to-business (“B2B”) arrangements, the guidelines address the role of service providers as data intermediaries, highlighting expectations around contractual controls, data protection, retention, and support for data controllers’ compliance obligations.
This B2B focus is particularly relevant for professional services firms and technology vendors. The PDPC’s guidance reinforces that organisations remain accountable for personal data even when AI decisions are automated or outsourced, and that AI governance should be integrated into existing data protection and risk management programmes.
MAS: AI Governance in Financial Services
MAS has adopted a sector-specific, pro-innovation approach to AI governance in financial services. Rather than issuing AI-specific regulation, MAS integrates AI considerations into its broader supervisory framework covering technology risk management, outsourcing, conduct, and market integrity. The Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) continue to shape expectations around explainability, bias mitigation, human oversight, and accountability in AI-driven decision-making.
MAS’s approach combines supervision with incentives. Through the Artificial Intelligence and Data Analytics (AIDA) Grant under the Financial Sector Technology and Innovation (FSTI) Scheme (valid until March 2026), MAS co-funds financial institutions’ adoption of AI and data analytics, subject to criteria relating to governance, capability-building, and workforce impact. This reflects a regulatory philosophy that supports responsible AI adoption rather than merely constraining risk.
MinLaw: Generative AI in the Legal Sector
In September 2025, MinLaw launched a public consultation seeking feedback from legal professionals and the public on a proposed “Guide for Using Generative AI in the Legal Sector”. The proposed guide emphasises that lawyers remain fully responsible for their work product and must comply with core professional obligations, including competence, confidentiality, transparency, and duties to clients and the courts.
The guide further encourages law practices to adopt structured governance measures, including internal policies, tool evaluation processes, training, and ongoing oversight. It reflects Singapore’s broader approach of adapting general AI governance principles to sector-specific ethical and professional contexts.
Digital Governance Beyond AI
Singapore’s AI governance model sits within a wider digital governance framework that prioritises trust and security. Initiatives such as the Digital Government Blueprint, Singpass, and secure digital service platforms demonstrate a long-standing commitment to trusted digital infrastructure. Legislative developments in online safety, including the forthcoming Online Safety (Relief and Accountability) Bill, signal a gradual move towards more binding obligations for digital platforms in areas involving systemic societal risk.
Conclusion
Singapore’s digital and AI governance framework is defined by voluntary, principles-based guidance, sector-specific regulation, strong data protection foundations, international interoperability, and practical tools and incentives. For organisations, compliance in Singapore requires governance by design rather than box-ticking against a single statute. Organisations should align internal policies with IMDA’s frameworks and AI Verify, integrate PDPC guidance into data protection programmes, meet sectoral expectations from regulators such as MAS, and adopt professional guidance where applicable.
As AI technologies continue to advance, Singapore’s governance model offers flexibility, clarity, and a strong emphasis on trust—qualities that are likely to remain central to its regulatory philosophy in the years ahead.
