This article is an extract from GTDT Market Intelligence Artificial Intelligence 2023.

1 What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?

Currently, there is no AI-specific law in Canada. However, various provincial and federal laws already apply to different uses of AI. For example:

  • the Canada Consumer Product Safety Act and provincial consumer protection laws regulate misleading terms and conditions, misrepresentation and undue pressure in the provision of AI-related goods and services;
  • the Criminal Code includes prohibitions against hacking activities and malicious use of technology;
  • the Food and Drugs Act, the Motor Vehicle Safety Act and the Bank Act all regulate and provide guidance on safety and security obligations for organisations operating in their respective sectors;
  • the Canadian Human Rights Act and provincial human rights laws provide avenues for redress in cases of discrimination, including discrimination that occurs in conjunction with automated decision-making;
  • sale of goods laws, product liability laws, contractual liability and tort law impose various obligations on designers, manufacturers, retailers and users of AI systems; and
  • private-sector privacy compliance is regulated by the federal Personal Information Protection and Electronic Documents Act (PIPEDA), which provides a framework governing the collection, use and disclosure of personal information. Provinces with substantially similar private-sector legislation (eg, Alberta and British Columbia) provide similar rules, with Quebec mandating new compliance obligations (eg, notification, correction, complaints) regarding the use of automated decision-making systems that rely on personal information, effective 22 September 2023.

With respect to other jurisdictions, Canada’s regulatory approach to AI appears to lie between the European Union’s proactive regulatory stance, epitomised by the forthcoming EU AI Act, and the United States’ free-market regulatory attitude.

In June 2022, Canada’s Minister of Innovation, Science and Industry tabled Bill C-27, the Digital Charter Implementation Act, 2022 (Bill C-27), which would replace the outdated federal private-sector privacy law, PIPEDA, with the Consumer Privacy Protection Act (CPPA).

The CPPA is designed to improve Canadian privacy law’s interoperability with the EU’s GDPR and privacy law globally, placing disclosure and transparency requirements on organisations that use any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them, in addition to various other new privacy obligations. Finally, Bill C-27, if passed, would also introduce the Artificial Intelligence and Data Act (AIDA), Canada’s first AI-specific law, representing a novel step towards the direct regulation of artificial intelligence systems in the country.

2 Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?

Canada was one of the first countries to adopt a national AI strategy, in 2017. Since then, the government has launched various initiatives to advance the adoption of AI across Canadian sectors and industries, including the Pan-Canadian Artificial Intelligence Strategy, the Global Partnership on Artificial Intelligence and the Advisory Council on Artificial Intelligence.

Although there are no national efforts to create data sharing arrangements specifically in the context of AI, as of April 2023 the federal government, through the Treasury Board Secretariat, has released public guidance to assist parties in completing an information sharing agreement (ISA) when personal information is shared among federal institutions, including a template ISA and annexes. The federal government notes that an ISA should set out the terms and conditions that will govern the sharing of personal information between the parties. An ISA should be specific and precise, written in plain language so that all terms are fully understood, and flexible enough to allow for limited amendments. It should be drafted with guidance from program and project officials, privacy policy and information management experts, legal advisers and functional specialists, such as information technology system specialists and security experts. The government recommends that an ISA be developed whenever personal information is shared. If sharing the personal information is required under federal law, the institution must share it in accordance with the requirements of that law; in that case, an ISA is recommended but not strictly required.

3 What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?

Section 4.5 of the Guideline on Service and Digital provides guidance on the responsible and ethical use of automated decision systems. Automated decision-making refers to the use of technology to produce assessments about a particular individual or case, meant either to directly aid a human in their decision-making or to make a decision in lieu of a human. Specifically, section 4.4.2.4 of the federal Policy on Service and Digital states that deputy heads are responsible for ensuring the responsible and ethical use of automated decision systems, in accordance with the Treasury Board of Canada Secretariat’s direction and guidance, including (1) ensuring that decisions produced using these systems are efficient, accountable and unbiased; and (2) ensuring transparency and disclosure regarding use of the systems, as well as ongoing assessment and management of risks.

This policy requirement, which applies to automated decision-making systems developed or procured on or after 1 April 2020, and its supporting directive requirements aim to reduce risks to Canadians and federal departments when using automated decision-making systems, and to ensure efficient, accurate, consistent and interpretable decisions made pursuant to Canadian law. Departments adopting automated decision-making systems are encouraged to take early action so that they can address implementation concerns such as bias and lack of transparency at the outset. This proactive, consistent and responsible approach also minimises the government of Canada’s legal liability and public-facing risks.

Furthermore, section 6.2.3 of the federal government’s Directive on Automated Decision-Making (DADM) advises that being able to explain how decisions are made is critical. If providing this explanation to the client requires understanding how an artificial intelligence (AI) system arrived at its result, it is important that the AI model itself be interpretable. An easily interpretable model can also greatly simplify testing and verification of the system, including assessing it for bias, and recent computational advances have produced many techniques for achieving this. The DADM notes that the way an explanation will be derived for decisions should be considered when selecting and designing a machine-learning model.
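
To make the interpretability point concrete, the following minimal sketch (illustrative only; the dataset and feature names are hypothetical and do not reflect any government system) shows how an inherently interpretable model, such as a shallow decision tree, yields human-readable rules that can be quoted back to a client as the explanation for an automated decision:

```python
# Illustrative only: hypothetical eligibility data; not drawn from any real system.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features per applicant: [income, years_resident, dependants]
X = [
    [30_000, 2, 0],
    [55_000, 10, 2],
    [22_000, 1, 1],
    [80_000, 5, 3],
]
y = [1, 0, 1, 0]  # 1 = favourable decision, 0 = unfavourable

# A shallow tree keeps every decision path short enough to explain to a client.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# export_text produces the plain-language rules behind each automated decision.
print(export_text(model, feature_names=["income", "years_resident", "dependants"]))
```

A black-box model, by contrast, would require separate post hoc explanation tooling, which is why the DADM encourages weighing explainability at the model-selection stage.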

The proposed Artificial Intelligence and Data Act (AIDA) is the first legal framework in Canada to address adverse impacts that occur due to systemic bias in high-impact AI systems in a commercial context. AIDA also seeks to address harms to individuals that may arise from high-impact AI systems (ie, physical harm, psychological harm and economic loss). Through AIDA, businesses would be required to identify and address bias and risks of harm from their AI systems.

4 What is the government policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?

The DADM requires that owners of AI systems conduct risk assessments during the development of the automated decision system and establish appropriate safeguards, in accordance with the Policy on Government Security. The objectives of the Policy on Government Security are to effectively manage government security controls in support of the trusted delivery of government of Canada programs and services and of the protection of information, individuals and assets, and to provide assurance to Canadians, partners, oversight bodies and other stakeholders regarding security management in the government of Canada. Under the policy, deputy heads have specific responsibilities, including identifying security and identity management requirements for all departmental programs and services, considering potential impacts on internal and external stakeholders, and ensuring that security incidents and other security events are assessed, investigated, documented, acted on and reported to the appropriate authority and to affected stakeholders. The Treasury Board of Canada Secretariat also has specific responsibilities, including establishing government-wide security policy governance to set strategic direction and priorities, coordinating security priorities, plans and activities government-wide, and liaising with other lead security agencies on matters of national security and emergency management.

5 How are AI-related data protection and privacy issues being addressed? How will these issues affect data flows and data sharing arrangements?

The regulation of data, privacy and AI in Canada currently takes several forms, the principal being consent and disclosure obligations prescribed under the federal private-sector privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA).

PIPEDA requires organisations engaged in commercial activity to obtain the appropriate level of consent for specified purposes from data subjects upon the collection of their personal information. However, legislators have been keen to impose stronger disclosure obligations where AI systems are used to make automated decisions about data subjects using their personal information.

To this end, the strongest inroads have been made in Quebec, which has amended its private-sector privacy law and information technology laws to include several AI-related data protection and privacy obligations, two of the most relevant being:

  • organisations that create or intend to create biometric information databases (weight, height, facial features, etc) must disclose such databases to the province’s privacy regulator no later than 60 days before they are brought into service. In addition, organisations must treat all biometric information as ‘sensitive’ personal information, which carries more onerous consent and disclosure requirements; and
  • as of 22 September 2023, private-sector organisations must comply with new accountability and transparency requirements governing the use of automated decision-making systems that rely on personal information (including notification, correction, and complaint processes).

The CPPA proposed under Bill C-27 would, if passed, represent the largest overhaul of the Canadian data privacy landscape since PIPEDA (which the CPPA would replace). The CPPA would obligate businesses to provide a general account of their use of any automated decision-making system to make decisions, predictions or recommendations about individuals where these could have a significant impact on them.

Furthermore, AIDA would institute regulations affecting international and interprovincial trade and commerce in artificial intelligence systems, including new rules affecting the large databases that inform and train AI algorithms. AIDA would establish requirements for the design, development and use of AI systems, including measures to mitigate risks of harm and biased output across the life cycle of an AI system. Organisations that use regulated AI systems would be expected to consider bias-related impacts of their systems, monitor their use and make efforts to mitigate biased outputs. Finally, AIDA would implement a penalty regime for offences under the act. For example, a new criminal offence would be created for the intentional use or processing of unlawfully obtained personal information to design, develop, use or make available for use an AI system. Offences under the act can carry fines of up to C$10 million or 3 per cent of a company’s gross global revenues, whichever is greater.
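
Purely as an arithmetical illustration of the ‘whichever is greater’ ceiling described above (a sketch, not legal advice; the revenue figure is invented):

```python
# Sketch of the fine ceiling described above: the greater of C$10 million or
# 3 per cent of gross global revenues. Illustrative only; not legal advice.
def aida_fine_ceiling(gross_global_revenue_cad: float) -> float:
    return max(10_000_000.0, 0.03 * gross_global_revenue_cad)

# A company with C$1 billion in gross global revenues: 3% (C$30M) exceeds C$10M.
print(f"C${aida_fine_ceiling(1_000_000_000):,.0f}")  # C$30,000,000
```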

6 How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?

Most significantly, the federal government has issued a framework, through the DADM, for the responsible use of AI by government entities. This directive applies to all automated decision systems developed or procured after 1 April 2020 and applies to all institutions subject to the Policy on Service and Digital, unless excluded by specific acts, regulations or Orders-in-Council. The objective of the directive is to ensure that automated decision systems are deployed in a manner that reduces risks to clients, federal institutions and Canadian society, and leads to more efficient, accurate, consistent and interpretable decisions made pursuant to Canadian law. The government is committed to using artificial intelligence in a manner that is compatible with core principles of administrative law such as transparency, accountability, legality and procedural fairness. The federal government also issued guidance to federal institutions on their use of generative AI tools on 6 September 2023.

From 7 May to 4 June 2021, the Ontario government sought input and ideas from Ontarians on how the government could develop an AI framework that is accountable, safe and rights-based. Following the consultation period, the Ontario government stated that the input received would inform its AI framework and that it would continue to engage with Ontarians as it took the next steps towards trustworthy AI use in Ontario. On 17 January 2022, the government of Ontario released six beta principles for the ethical use of artificial intelligence and data-enhanced technologies in Ontario. The principles set out six objectives for aligning the use of data-enhanced technologies within government processes, programs and services, with ethical considerations prioritised. Under these principles, the use of AI must be transparent and explainable; good and fair; safe; accountable and responsible; human-centric; and sensible and appropriate.

7 Has your jurisdiction participated in any international frameworks for AI?

Canada is a member of various international organisations that are working towards creating frameworks for AI. First, Canada is a founding member of the Global Partnership on Artificial Intelligence (GPAI), whose members include Australia, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, the United States and the European Union. The purpose of GPAI is to guide the responsible development and use of AI, and it has working groups focusing on responsible AI, data governance, innovation and commercialisation, and the future of work.

Additionally, Canada is a member of the Organisation for Economic Co-operation and Development (OECD). The OECD has a variety of committees and working groups, including one focused on AI. The AI group focuses on recommendations for AI use around the world and ensuring that AI benefits society.

Furthermore, through the Standards Council of Canada, Canada is a member of the International Organization for Standardization (ISO). In 2017, the ISO and the International Electrotechnical Commission (IEC) created a joint technical committee on AI, ISO/IEC JTC 1/SC 42 (the Joint Committee on AI), which aims to provide guidance and develop a standardisation programme for AI. So far, the Joint Committee on AI has published 17 standards documents, with 27 more in development. While many of these documents provide high-level information on AI rather than specific guidelines, some of the concrete measures contemplated in the published standards and those in development include risk management tools such as AI impact assessments. Of note, the proposed Artificial Intelligence and Data Act (AIDA) would also introduce impact assessment requirements if it becomes law.

Finally, as a member of the G7, Canada is taking part in the Hiroshima AI Process, which was announced in May 2023 during the G7 Summit in Japan. This group will be discussing AI governance, IP rights protections, transparency, responsible AI, and other issues stemming from AI.

8 What have been the most noteworthy AI-related developments over the past year in your jurisdiction?

Canada is focusing on developing frameworks and regulations for AI. Most notably, in August 2023, Canada released the ‘Canadian Guardrails for Generative AI – Code of Practice’. The code of practice sets out proposed guardrails for the development and use of generative AI, with a focus on safety, fairness, transparency, human oversight and monitoring, accountability, and validity.

In addition to the above, there are numerous developments throughout Canada in a variety of areas related to AI. One of the most noteworthy is in the privacy space, where the proposed Artificial Intelligence and Data Act is being contemplated as part of Bill C-27, which would also require businesses to provide a general account of their use of any automated decision-making system, such as an AI model, that makes predictions, recommendations or decisions about individuals. Additionally, with respect to biometric data, Quebec’s Law 25 requires companies to disclose databases of sensitive information (including biometric data) to Quebec’s privacy regulator, and imposes reporting requirements and obligations on companies maintaining such databases.

Another major development in Canada is the rise of, and growing concern regarding, generative AI. The explosion of generative AI has led to concerns regarding privacy, training materials, bias, infringement and ownership. Canadian courts and intellectual property offices are now facing applications concerning IP created by AI models and decisions made by AI models. Canadian courts have yet to issue any rulings with respect to ownership of AI-generated content, although the prevailing wisdom in Canada is that, absent legislative amendments, intellectual property rights will not arise in AI-generated works or inventions, since both copyright and patent protection are predicated on meaningful human contributions. While Canadian authorities are considering whether amendments to Canadian copyright legislation are appropriate to create a new fair dealing exception permitting the training of AI models on copyrighted content, there is currently no exception that would clearly render such activities non-infringing. Lastly, on the privacy front, the Office of the Privacy Commissioner of Canada has launched an investigation into OpenAI’s data collection and usage practices. This investigation is expected to provide insight into the development of AI models and the ownership of AI-related data.

9 Which industry sectors have seen the most development in AI-based products and services in your jurisdiction? Are there any emerging industry or non-governmental standards governing the development and use of AI-related technologies?

Almost every industry in Canada has seen a surge in the introduction and deployment of AI-based products and software. Legislation and regulations are trying to keep up with the evolving industries. In the financial services sector, new AI products related to online lending, robo-advisers, fraud detection and market predictions are being developed. In the automotive sector, AI is being used to develop autonomous and self-driving cars. Additionally, in the healthcare sector, AI solutions ranging from patient care solutions, administrative processes, payment processes and other diagnosis and treatment solutions are being developed in Canada.

Some industries have begun to provide guidance and regulations to assist developers. For example, in the financial sector, the Office of the Superintendent of Financial Institutions (OSFI) has provided guidelines with respect to technology, cyber risk management and third-party risk management. These guidelines outline expectations related to security and the use of new technologies.

In the healthcare sector, Health Canada has collaborated with the US Food and Drug Administration and the United Kingdom’s Medicines and Healthcare products Regulatory Agency to identify guiding principles that can inform the development of Good Machine Learning Practice. These principles promote safe, effective and high-quality medical devices that use AI and machine learning, and cover all aspects of the technology, including clinical trials, training datasets, model development, model monitoring and the information provided to users.

Other groups have also assisted with developing guidelines for various sectors and industries, such as the CSA Group, which has published research on the risks associated with children’s privacy in the age of artificial intelligence.

10 Are there any pending or proposed legislative or regulatory initiatives in relation to AI?

In June 2022, the Digital Charter Implementation Act, 2022 (Bill C-27) was introduced. Bill C-27 is designed to overhaul the federal private-sector privacy legislation, PIPEDA, and modernise the framework for the protection of personal information in the private sector. Bill C-27 is undergoing legislative review in Parliament and, if passed, would introduce the following legislative updates:

  • The new Consumer Privacy Protection Act would require organisations to be open and transparent about the use of any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them.
  • The Artificial Intelligence and Data Act (AIDA) would introduce new measures to regulate international and interprovincial trade and commerce in artificial intelligence systems. This law is designed to protect individuals and communities from the adverse impacts associated with ‘high impact’ AI systems, which is to be defined in future regulation. AIDA would establish common requirements for the design, development and use of AI systems, including measures to mitigate risks of harm and biased output. AIDA would also prohibit specific practices with data and artificial intelligence systems that may result in serious harm to individuals or their interests. AIDA provides for severe penalties for civil and criminal offences under the act.
  • In March 2023, the federal government published the AIDA Companion Document, which provides insight into the implementation timeline and the factors under consideration in drafting AIDA regulations, including, among other things, the definition of ‘high impact’ AI systems under the act.

If passed, the provisions of AIDA would come into force no sooner than 2025.

11 What best practices would you recommend to assess and manage risks arising in the deployment of AI-related technologies, including those developed by third parties?

Key areas for companies to focus on include licence rights, privacy, bias and inaccuracies associated with AI-related technologies. With respect to licensing, each AI model and software product carries different rights as to ownership and as to who may use and own the inputs, training materials and outputs of AI-related technologies. Companies must determine whether they have the proper rights to all data being input into, and all content being output from, the AI model.

Similar diligence applies to privacy. A company may use data that is subject to Canadian privacy law, such as ‘personal information’, and using this type of information requires the company to obtain informed consent before the data is used in AI software or technology. Companies should ensure that they are permitted to use the data they collect in AI models.

Bias concerns related to AI should be considered before using any AI technology. Consideration must be given to how the AI was trained and how the data for the AI model was prepared. If there is bias in the training materials, the output may be skewed or unreliable and may not provide relevant results for the company.
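
As a minimal, hypothetical sketch of what such pre-deployment diligence can look like in practice (the group names, data and review threshold are invented for illustration), one common screening step is to compare favourable-outcome rates across groups before relying on a model’s output:

```python
# Illustrative only: hypothetical model decisions (1 = favourable outcome),
# recorded separately for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

# Favourable-outcome rate per group.
rates = {group: sum(d) / len(d) for group, d in decisions.items()}

# Disparate-impact ratio: worst-treated group's rate over best-treated group's.
ratio = min(rates.values()) / max(rates.values())

print(rates)                   # {'group_a': 0.75, 'group_b': 0.375}
print(f"ratio = {ratio:.2f}")  # 0.50: a ratio this far below 1.0 warrants review
```

Checks of this kind do not establish legal compliance on their own, but they give companies a concrete artefact documenting the kind of bias assessment the proposed AIDA contemplates.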

A further recent concern with AI technologies is inaccuracy, or ‘hallucination’. Generative AI programs built on large language models, such as ChatGPT, are prone to hallucinations, in which the program produces a seemingly correct answer to a question that in fact has no grounding in reality. Inaccurate outputs might lead to a number of legal liabilities, for example under defamation law, consumer product liability law or tort law. Companies should treat AI technology as a tool and rely on its results only once they are confident in the system and have verified its output.


The Inside Track

What skills and experiences have helped you to navigate AI issues as a lawyer?

At Baker McKenzie, we have been at the forefront of advising clients on their development and use of new and emerging technologies, such as AI. With our global multi-disciplinary legal and industry expertise, we effectively guide clients through complex and evolving legal and regulatory landscapes as well as prepare them to deal with emerging legal issues. We offer collaborative and practical solutions to ensure that our clients drive and pursue technological innovation while remaining legally compliant and in line with industry standards.

Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?

Developments in AI technology are both disruptive and transformative, bringing with them exciting opportunities for companies to improve their internal operations, further their innovation strategies and revolutionise their product and service offerings. AI technology advancements are impacting every industry, whether it is the use of fraud detection tools in financial services, disease identification and detection tools in healthcare, or self-driving vehicles in the automotive sector. The most notable AI development in 2023 is generative AI: AI tools that were primarily used to identify and analyse complex sets of data to augment decision-making are now capable of generating new text, images and other media. We are excited by the potential of new AI tools that will revolutionise the way in which business is done and look forward to seeing how regulatory and industry standards will evolve to govern such advancements.

What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?

The successful implementation of AI requires consideration of organisational, technical, ethical and regulatory issues. There is a myriad of emerging legal issues related to AI regulation and liability, such as employment, intellectual property, data privacy and governance, reliability and quality of AI outputs, and bias within algorithms. AI is driven by data. One of the greatest challenges facing developers and deployers is the potential for harm based on inherent bias and discrimination arising from the data that is used to train the AI model. Developers and deployers investing in and leveraging AI tools from external parties will need to consider the underlying data sources and any associated risks. In generative AI, there is also the potential for hallucinations to occur, in which an AI model generates incorrect information that is presented and relied upon as factual information. We have provided holistic legal and practical guidance to clients so that they have appropriate guardrails and risk identification and mitigation strategies in place to ensure that their use of AI is ethical and legal.