Anne-Marie Bohan is the head of Matheson’s technology and innovation group. She has over 25 years’ experience in technology related legal matters and has acted in some of the largest value and most complex IT and telecommunications systems and services outsourcing contracts, including advising on a number of the largest and highest value financial services outsourcings in Ireland. Anne-Marie’s practice includes advising a broad range of clients on data protection, privacy issues and cybersecurity issues. Anne-Marie has lectured on IT, data protection and financial services in the Law Society of Ireland, the National University of Ireland Maynooth, and more broadly.

Rory O’Keeffe is a partner in Matheson’s technology and innovation group. Rory has extensive experience on a broad range of international and domestic technology and business transformation deals. Based in London, he brings together significant in-house and practical experience in advising on technology and commercial legal issues, with a particular specialism in cloud, AI, robotics, IoT, cybersecurity and complex technology contracting. Prior to joining Matheson, Rory worked as Senior Legal Counsel in a Fortune Global 500 company. He spent over 10 years in London advising on complex, high value, fast-paced, multi-jurisdictional deals. Rory is also committee member of the Society of Computers and Law, specifically supporting the Inclusion and Diversity group. He regularly presents on topics, most recently on cybersecurity, blockchain, NFTs and AI.


1 What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?

The current state of the law and regulation governing artificial intelligence (AI) in Ireland is similar to that in other EU countries. At present, there are no rules or regulations that apply specifically to AI in Ireland. However, Ireland has published a National AI Strategy titled AI – Here for Good (see further comments below). In addition, at European level, there is a range of laws and regulations that apply to AI. These include:

  • The General Data Protection Regulation (GDPR) – AI is not explicitly discussed in the GDPR; however, many of the provisions in the GDPR apply to the processing of personal data in an AI context.
  • Data Protection Act 2018 (DPA) – the DPA is the principal national data protection legislation in Ireland and supplements the GDPR in Irish law.
  • The Platform-to-Business Regulation – this regulation applies to providers of online search engines and online intermediation services.
  • European Union (Copyright and Related Rights in the Digital Single Market) Regulations 2021, amending the Copyright and Related Rights Act 2000 – this Act affords limited protection to ownership of content created by an AI system. Under section 30 of the Act, copyright protection is specifically afforded to computer-generated works for 70 years from the date the work is first lawfully made available to the public.

On 21 April 2021, the European Commission submitted its proposal for the first-ever Artificial Intelligence Regulation (AI Act). The regulation represents the first attempt at an EU level to regulate AI horizontally. The aim of the AI Act is to establish a harmonised standard for AI that would lay down rules on the use and governance of AI systems, the risks associated with AI and the development of AI. The AI Act aims to establish the European Union as a trustworthy central hub for the ethical use of AI on a global scale by adopting a risk-based approach, addressing the legal and commercial risks generated by the use of AI. The AI Act divides AI systems into categories according to risk: unacceptable-risk AI systems, high-risk AI systems, and limited and minimal-risk AI systems, and places different obligations on providers depending on the AI system and its level of risk.

On 28 September 2022, the European Commission proposed updated liability rules on products and new liability rules on artificial intelligence. The updated Product Liability Directive and new AI Liability Directive are to complement the AI Act.

The proposed AI Act and these directives will go through the European legislative process where the European Parliament and the Council of the European Union will have the ability to propose amendments to the European Commission’s proposals.

There is not much AI-specific legislation in other jurisdictions. Like the EU, for example, the United Kingdom has yet to adopt any AI-specific legislation. The UK government is currently reforming UK data protection laws (most recently under the UK Data Protection and Digital Information Bill). In July 2022, the UK government published an AI Action Plan, following on from the UK National AI Strategy (September 2021). By comparison with the AI Act, the UK appears to be taking a decentralised approach to AI regulation.

2 Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?

On 8 July 2021, the Irish government released Ireland's first National AI Strategy, titled AI – Here for Good. The National Strategy sets out how Ireland can be an international leader in using AI to benefit our society. The strategy focuses on educating people on the potential of AI and creating an ecosystem that promotes trustworthy AI. The National Strategy proposes strands of action that aim, among other things, to: (1) build public trust in AI and leverage AI for economic and societal benefit; (2) foster a desirable regulatory environment; (3) foster public sector leadership in the adoption of AI; and (4) increase productivity by enabling AI technology adoption by Irish enterprises.

At present, the legislative framework underpinning the Irish government's strategy includes the Data Sharing and Governance Act 2019 (DSG Act) and the Public Service Data Strategy (2019–2023), which provide guidance on data sharing in the public sector and place certain obligations on public sector bodies to improve their data management and data sharing processes. The Irish Department of Public Expenditure and Reform established the Data Governance Board on 22 December 2021, whose function is to oversee data sharing arrangements under the DSG Act.

The Irish Data Protection Commission (DPC) has recommended that all data sharing arrangements in the public sector should generally:

  • have a clear basis in primary legislation or alternatively, in secondary legislation (provided a primary legislative basis exists) thereby ensuring there is no room for confusion in relation to the nature of the arrangement;
  • have a clear justification for each data sharing activity;
  • inform individuals in relation to the sharing of their data and the purpose for which it is shared;
  • explain how the sharing of the data will affect the individuals concerned; and
  • inform individuals on the retention period and the disposal process of the shared data.

The DPC welcomed the decision of the Court of Justice of the European Union (CJEU) in Bara and Others (C-201/14), which placed a strong focus on public sector data sharing arrangements. On the basis of the decision, the DPC has reiterated the importance of keeping data subjects informed on how their personal data is being processed (including any sharing of that personal data).

The final provisions of the DSG Act came into force on 31 March 2022, with the effect that section 38 of the DPA 2018, which supplements article 6 of the GDPR, can no longer be relied on as a valid legal basis for data sharing arrangements between public bodies.

3 What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?

The Irish National AI Strategy (discussed in question 2) aims to address the ethical and human rights issues raised by the deployment of AI. The strategy aims to serve as a roadmap to more ethical and trustworthy development of AI in Ireland. The government is focused on promoting an ethical and trustworthy approach in driving the adoption of AI in both the private and public sector. One of the ethical issues raised by the deployment of AI is in relation to the human consequences of developing AI-based systems that could impact the availability of jobs and change livelihoods.

The AI Act also aims to address the fundamental human rights issues and ethical issues raised by the deployment of AI. The AI Act follows a risk-based approach and seeks to address the risks caused by the use of AI through a structured set of rules. The AI Act categorises AI systems into four types depending on the level of risk involved.

Prohibited or unacceptable risk AI systems

Some AI systems are considered to pose an unacceptable risk to individuals and, as a result, are prohibited. For instance, the AI Act explicitly prohibits subliminal, manipulative or exploitative AI systems that are likely to cause physical or psychological harm. This includes practices that manipulate individuals through subliminal techniques beyond their consciousness, and practices that seek to exploit vulnerable persons, such as persons with disabilities or children, in order to distort their behaviour in a manner likely to cause harm to them or others. The AI Act also prohibits the use of AI systems by public authorities for 'AI-based social scoring', that is, evaluating a person's level of trustworthiness based on their social behaviour or personal traits. Furthermore, subject to very limited exceptions, the placing on the market or use of 'real-time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited, as it is considered an unacceptable intrusion on a person's rights and freedoms.

High risk AI systems

Some AI systems are considered high-risk, and specific rules apply to AI systems that create a high risk to the health and safety of individuals. High-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements. There are two main categories of high-risk AI systems, namely:

  • AI systems that are intended to be used as safety components for certain regulated products (eg, motor vehicles); and
  • AI systems used in certain specific contexts and for specific purposes (eg, remote biometric identification, or AI used in education).

High-risk AI systems include AI technology used in critical infrastructure that could put the life and health of citizens at risk, and AI used in the administration of justice and democratic processes. All remote biometric identification systems are considered high-risk and are subject to strict requirements.

Limited or no risk systems

The majority of AI systems used in the EU will fall under this category. This includes AI systems such as chatbots and AI-powered inventory management systems.

The National AI Strategy recognises and supports the European Commission's 'Ethics Guidelines for Trustworthy AI' and 'Policy and Investment Recommendations for Trustworthy Artificial Intelligence' (2019). The Ethics Guidelines set out what trustworthy AI should look like. According to the Ethics Guidelines, trustworthy AI should be lawful, complying with all applicable laws and regulations; ethical; and robust, from both a technical and a social perspective.

Ireland will actively continue to play an important role in discussions at an EU level in relation to managing ethical and fundamental human rights, while creating a safe space for the innovation of AI.

4 What is the government policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?

The Irish government published its National Cyber Security Strategy (2019–2024), with a vision of allowing Ireland to continue to safely enjoy the benefits of the digital revolution and to play a full part in shaping the future of the internet. The Irish government, through its National Security Analysis Centre, is considering potential threats that AI technologies could pose to Ireland’s security as part of its ongoing work on the development of a new National Security Strategy. At the time of writing, there is no confirmed date when this new strategy will be finalised and published.

As mentioned earlier, Ireland is awaiting the enactment of the AI Act, which will impose AI risk assessment categorisations that will have implications for export and import of AI-based products into Ireland and the EU.

Ireland applies the various United Nations and EU measures adopted concerning trade (including trade sanctions). Irish laws also cover the control of exports, transfer, brokering and transit of dual-use items, including a licensing requirement in respect of brokering activities involving persons and entities negotiating or arranging transactions that may involve the transfer of items or technology listed on the EU Common Military List.

5 How are AI-related data protection and privacy issues being addressed? Have these issues affected data sharing arrangements in any way?

In Ireland, the GDPR applies to all processing of personal data, including processing carried out by companies using AI systems; such companies must comply with the GDPR. The GDPR imposes an obligation on companies to be transparent in their processing, to protect the personal data in their possession and to provide data subjects with certain legal rights in relation to their personal data. The GDPR imposes different rules depending on whether an individual or company is acting as a data controller or a data processor. A data controller must comply with the principles of lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, and integrity and confidentiality. The controller must oversee how the data is processed and must supervise the data processor in how it handles the personal data.

According to article 22 of the GDPR, the data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them, unless the decision is necessary for a contract with the data subject, is authorised by law or is based on the data subject's explicit consent.

6 How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?

As yet, there is no AI-specific legislation to be enforced and monitored in Ireland. However, to the extent existing laws apply to AI, existing government agencies have been exercising their powers. For example, the DPC issued guidance in December 2021 on the use of AI and children's data, 'Children Front and Centre: Fundamentals for a Child-Oriented Approach to Data Processing'.

7 Has your jurisdiction participated in any international frameworks for AI?

Yes. Ireland, through the National Standards Authority of Ireland (NSAI), participates in the International Organization for Standardization (ISO), which is undertaking standardisation work relating to AI. The NSAI hosted an international plenary meeting to develop ISO standards for AI and to understand the use, application and ethical concerns relating to AI. A key aim of the meeting was to formulate standard policies in the areas of AI standards, AI trustworthiness and big data.

Ireland has, for example, signed a declaration of cooperation on AI with other European countries, with member states agreeing to work together on the most important issues raised by AI, from ensuring Europe's competitiveness in the research and deployment of AI, to dealing with social, economic, ethical and legal questions. Ireland's policy development is underpinned by engagement in relevant international AI policy and governance processes at the EU, the United Nations, the Organisation for Economic Co-operation and Development and the Global Partnership on Artificial Intelligence.

8 What have been the most noteworthy AI-related developments over the past year in your jurisdiction?

In terms of regulation of AI over the past year, the most noteworthy AI-related developments are the proposed AI Liability Directive and guidance from the DPC regarding the use of AI and children’s data.

Generally, it is evident that cybercrime now poses as significant a threat to our society as the typical criminal activity that occurs in the physical world. In recent years, dependency on technology has increased exponentially. More people in Ireland have adapted to a hybrid-working model and, as a result, there is a greater risk of cyberattacks. Through AI algorithms and data analysis, it is now possible to prevent cyberattacks more readily and successfully than ever before, and more businesses are believed to be depending on AI to strengthen their cybersecurity defences.

As part of those defences, the need for operational resilience has been raised by many experts, especially in light of new risk management and incident reporting obligations flowing from the European AI Strategy (2018), including the proposed EU Network and Information Security Directive (NIS 2 Directive), the EU Digital Operational Resilience Act (DORA) and the EU Cyber Resilience Act. Each of these developments will need to be read alongside the AI Act and the conformity assessment requirements set out in it.

In May 2022, Ireland appointed its first AI Ambassador to lead the national conversation on the role of AI in the lives of the Irish population, with an emphasis on an ethical approach.

9 Which industry sectors have seen the most development in AI-based products and services in your jurisdiction?

AI is an area that is growing rapidly in Ireland. The National AI Strategy estimates that the use of AI-based products and services could boost Ireland's GDP by 11.6 per cent, or €48 billion, by 2030. We have seen major developments in how AI has redefined many industries in Ireland. With recent technological advancements, the manufacturing industry is said to be the fastest growing in the context of AI in Ireland.

Ireland is recognised as having world-class centres of excellence in manufacturing sectors, including biopharma, medtech, technology, engineering and food, and in financial services. Balanced with the ability to deploy AI at scale within these sectors, it is expected that these sectors will continue to see the most development in AI-based products and services.

By way of example, Irish AI healthcare solutions have included tools to tackle health issues such as chronic diseases, and virtual reality tools that assist in the administration of healthcare generally.

10 Are there any pending or proposed legislative or regulatory initiatives in relation to AI?

Yes. As mentioned in question 1, the pending legislation comprises the AI Act and the AI Liability Directive. At the time of writing, there is no confirmed date for when these will be enacted.

11 What best practices would you recommend to assess and manage risks arising in the deployment of AI?

Given the various proposed EU laws and new risk management requirements, companies should begin now to familiarise themselves with those laws that will impact their business. Adherence to the privacy-by-design, privacy-by-default principles enshrined in the GDPR, ‘Responsible AI’ guidance, security-by-design and industry best practices will each assist with assessing and managing risks arising in the deployment of AI.

These regulatory developments are expected to require enterprises to embed operational and digital resilience into their systems, products and practices; to educate and train their employees effectively on the procurement, use and ongoing monitoring of AI systems (eg, identifying bias in datasets); and ensure adequate planning, testing and retesting of AI systems throughout their life cycle.


The Inside Track

What skills and experiences have helped you to navigate AI issues as a lawyer?

Matheson has been very fortunate to be an early adopter of AI technology, both in its own operations and in the delivery of services to its clients. As tech lawyers, it is important to have a growth mindset (as our clients do) and to learn everything we can from them. AI and data laws, and AI products and services, are ever evolving. The legal queries continue to be challenging in the best way possible. We have learned to look around those digital corners for clients. As in the tech world, AI lawyers need to understand and accept that change is a constant. After all, it was only last year that the hype around the metaverse really took shape, and the rush to set up the first outposts there is very real.

Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?

Emerging technology and services drive more exciting, complex questions. The greatest opportunities will exist within the digital economy, including the active, fast-paced innovation across all industries.

What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?

The greatest challenge is predicting how legislators and regulators will react to new AI products and markets. Connected to that is the pressure on clients to keep up with all the changes. Clients may take some comfort from existing laws and regulatory guidance to help bolster their predictions. For society, the challenges are around awareness of how an AI product works, and the legal and ethical issues involved.