The public sector's use of artificial intelligence stems primarily from the need to improve public services, increase government efficiency, and address social challenges. Around the world, governments are leveraging artificial intelligence to better cater to the needs of their citizens by automating mundane administrative work. While leveraging artificial intelligence is widely seen as a pressing need for governments seeking faster economic growth in today's digital world, it is equally essential to consider the legal implications of deploying artificial intelligence in the public sector.

In this 7th edition of the Curio, we analyse how, given the higher stakes in the public sector, the advantages of using artificial intelligence need to be ringfenced with legal oversight.

How do Governments Harness AI’s Potential Globally?

The OECD analysed 200 use cases across 11 government functions and found that governments are mostly using AI in public-facing service delivery and internal operations.[1] The use of AI by government authorities improves the efficiency of their functions.

For example, after the Indian government deployed the ‘Prediction of Adverse TB Outcomes AI Solution’, an AI tool that helps predict TB patients who are more likely to have an adverse outcome, there was a 27% decline in adverse outcomes.[2]

The use of AI also helps governments be more responsive to the public by providing more personalised services. Singapore’s tax authority’s multilingual chatbot gives personalised responses to individuals and has reduced call centre inquiries by 50%.

AI is being used not just to predict human behaviour but also to forecast natural disasters, thereby reducing their adverse effects. For example, in Alberta, Canada, AI is used to predict the likelihood of wildfire occurrences across protected forests.[3]

Security forces across the globe have also started deploying AI to enhance their military capabilities. For example, to improve border security and reduce terrorist infiltration and drug influxes, India has deployed an AI-based Intrusion Detection System, which detects human movement along sensitive borders.[4] The US Navy’s Aegis combat management system can simultaneously track more than 100 targets and guide multiple missiles to intercept them.[5] 

AI is also being used by the judiciary to deal with the massive number of cases it handles. For example, Germany has accelerated case processing by using an AI tool for metadata extraction.[6]

Against this backdrop, it is clear that the use of artificial intelligence has led to increased efficiency and productivity and to better decision-making and forecasting, in turn boosting a country's competitiveness and economic growth. Despite these advantages, the adoption of AI does not come risk-free. To unlock the benefits of AI, it is important to first understand the risks involved and then mitigate them.

Legal Implications in the Indian Context

Part IV (Articles 36 to 51) of the Indian Constitution outlines the Directive Principles of State Policy (DPSP). Though non-justiciable, these are principles that are fundamental in the governance of the country. Article 38 specifies that the State should secure the welfare of the people and strive to minimise income inequalities and eliminate other social inequalities. Article 39 emphasises that the State should ensure that all citizens, men and women equally, have a right to livelihood, and there is equal pay for equal work for both men and women. The DPSP has been the foundation for various social security legislations in the country.

While the responsible use of AI can be instrumental in achieving these goals, it is also important to consider the impact that a rapidly advancing technology can have on society. Various factors affect the operation of an AI system, right from its planning and testing stage to its deployment and use in the live environment. Insufficient exposure to diverse training data during development and deployment, overfitting, and noise and outliers in the training data can all impact the output an AI system provides. This could result in biased and discriminatory outcomes. Such outcomes are detrimental to society on their own, and more so when they stem from an AI system deployed for a government function. In India, any discrimination by the government would be a violation of the fundamental rights to equality under Articles 14, 15, and 16 of the Indian Constitution, as examined in detail in the last edition of the Curio, Artificial Intelligence: Bias, Discrimination and the Law.

Another key concern when AI is used in the public sector is the lack of transparency and explainability of the resulting government decisions. When AI replaces human decision-making, such as identifying a suspect through facial recognition technology, it can create an opaque system in which the decision-making process is neither understandable nor visible to the people, leading to distrust in government processes. Proper governance and transparency measures, through which the general public is able to understand the decisions made, need to be implemented to tackle this concern.

The government’s use of AI also sparks privacy concerns, as governments have access to huge databases of personal data. Using AI to process such data makes it more susceptible to cyber threats, increasing the chances of data breaches. The use of AI also enables mass surveillance, raising fears of pervasive state intrusion into people's personal lives, which could result in a violation of citizens' right to privacy under Article 21 of the Indian Constitution, as recognised by the Supreme Court in K.S. Puttaswamy v. Union of India[7]. While the Digital Personal Data Protection Act, 2023 (“DPDPA”) exempts processing by the government for a few specific purposes, such as processing in the interest of national security, public interest, or to prevent, detect, or investigate a crime, the DPDPA still requires government authorities to have reasonable safeguards to protect such data. Therefore, the government needs to ensure that guardrails are in place so that only the minimum data is processed for a specific purpose and that such data is protected from any unauthorised use or disclosure.

Global Perspective: How Does the World Tackle AI?

With the growing use of, and need for, AI in the public sector, governments around the world are trying to find the right approach to regulating it.

The European Union is one of the global pioneers in introducing AI regulation to govern the use of AI in both the private and public sectors. The European Union’s AI Act classifies AI systems based on their risk level and imposes obligations on those deploying high-risk AI systems, namely those that may cause significant harm to the health, safety, and fundamental rights of individuals.

While the federal administration of the United States of America has pivoted towards an innovation-first approach and a liberal framework for the use of AI, individual US states have their own executive orders and regulations to govern AI and mitigate its impact in the private and public sectors. These executive orders address whether and how AI should be used by the state government while acknowledging the potential harm that such use could cause, and they emphasise balancing the benefits of AI against the civil rights and safety of individuals.

The State of California has issued GenAI Guidelines for Public Sector Procurement, Uses and Training, 2024[8]. While these guidelines encourage the deployment of GenAI in the public sector, they also require public authorities to properly understand the use case and to assess the risks and potential impacts of deploying GenAI. The guidelines further encourage public authorities to engage in extensive testing of AI models before deployment, along with adequate staff training on the use of AI.

Unlike the European Union, the United Kingdom, as of now, does not have dedicated legislation to govern AI; rather, AI governance depends on sector-specific regulations and guidance. India takes a similar sector-specific approach. Though we do not have dedicated AI legislation, the RBI, SEBI, and ICMR have released guidelines on the use of AI in their respective sectors, and MeitY has also released the India AI Governance Guidelines, 2025, stipulating a few considerations before deploying and using AI, especially where the use could pose a high risk or significant threat to individuals.

At the multilateral level, the Council of Europe's AI Treaty has entered into force, and the United Nations has established dedicated AI governance bodies, signalling that the international community increasingly views public-sector AI governance as a matter requiring coordinated global attention.

Way Forward

While the use of AI by governments can phenomenally transform the way they operate, it is undeniable that the potential for adverse consequences is equally significant. At the same time, a failure to adopt AI will widen the gap between the capabilities of the public and private sectors, undermining the government's ability to meet the growing expectations of its citizens.

With the advancement of AI, the world is also witnessing a rapid evolution of the legal frameworks governing AI globally, yet the pace of legal development has not kept up with the advancement of the technology. The use of AI has certainly become unavoidable, and there remains a lacuna of regulations and legal precedents; there is no black-and-white rule book for the use of AI. In this evolving milieu, the way forward is not a binary choice between unchecked deployment and complete avoidance. Governments must instead pursue a calibrated approach: expanding the scope of AI in the delivery of state functions while embedding robust institutional safeguards, whether legally mandated or not, such as algorithmic impact assessments, human-in-the-loop mandates for high-stakes decisions, enforceable transparency obligations, and independent oversight mechanisms. This can ensure that a state harnesses AI's fullest potential without compromising the constitutional and legal rights of the individuals it is bound to uphold.