Making decisions about individuals using computers and computer algorithms is now commonplace. Proposals to use privacy laws to regulate automated decision making are also increasing. One of the first explicit attempts to regulate automated decision-making using privacy laws is the European Union General Data Protection Regulation (GDPR). More recently (and locally), both the Consumer Privacy Protection Act (CPPA), Canada’s controversial proposed new privacy law, and Bill 64, Quebec’s proposed privacy amendments, would enact new transparency and explainability obligations for automated decision making.

These proposed laws may help to ensure that AI uses personal information responsibly. But they also raise questions about what the new obligations will require in practice, the challenges organizations will have in trying to comply, whether the proposed changes go far enough or too far, and whether using privacy laws to regulate automated decision making is the appropriate mechanism for any such regulation. We address these questions in this blog post.

Background

The concepts of automated decision making and automated decision systems vary in scope. They are generally premised on the use of artificial intelligence systems (AI systems), a fast-evolving family of technologies that can bring a wide array of economic and societal benefits across the entire spectrum of industries and social activities.

AI systems can be generally understood as software that is developed with one or more of certain techniques and approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. Illustrations recently set out in a draft European Regulation for Harmonizing Rules on Artificial Intelligence include machine learning using methods such as deep learning; logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and statistical approaches, Bayesian estimation, and search and optimization methods.

The scope of regulating automated decision making using privacy laws varies internationally. The UK Information Commissioner’s Office (ICO) defines “automated decision-making” as “the process of making a decision by automated means without any human involvement. These decisions can be based on factual data, as well as on digitally created profiles or inferred data.” A broader meaning appears in the Government of Canada’s Directive on Automated Decision-Making, from which the CPPA definition is derived, which defines “Automated Decision System” as including “any technology that either assists or replaces the judgement of human decision-makers. These systems draw from fields like statistics, linguistics, and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural nets.”

This latter definition is exceptionally broad. It is not limited by classes of technology that will make automated decisions, the products, services, or sectors of the economy that will use it, the nature of the automated decisions that the systems could implicate, the classes of individuals that could be affected by decisions, or the nature or significance of the impacts of the decisions on individuals. In short, under such a broad definition, “automated decisions” will be ubiquitous.

Nascent as AI technology still is, automated decision making is already pervasive. It is used by private sector entities to screen job applicants and to flag violent content or “fake news” on social media.[1] It powers autonomous and semi-autonomous vehicles, including “self-parking” and collision avoidance systems. In the Australian state of New South Wales, authorities use a human-in-the-loop AI system to detect drivers who text behind the wheel. According to a 2018 report published by the International Human Rights Program and Citizen Lab, both at the University of Toronto:

A call with a senior … data analyst [at Immigration, Refugees and Citizenship Canada (“IRCC”)] in June 2018 confirmed that IRCC [was] already using some form of automated system to “triage” certain [immigration] applications into two streams, with “simple” cases being processed and “complex” cases being flagged for review. The data analyst also confirmed that IRCC had experimented with a pilot program to use automated systems in the Express Entry application stream, which has since been discontinued.[2]

Canada’s federal government put out a Request for Information in 2018 seeking “[i]nformation … in relation to whether use of any AI/[machine learning] powered solutions could be expanded, in future, to users such as front-end decision makers” at both IRCC (which processes immigration claims) and Employment and Social Development Canada (which processes benefits claims). In British Columbia, the Workers’ Compensation Board uses an automated decision system for claim intake and (in a small number of cases) adjudication.[3] In Estonia, meanwhile, the government is working towards deploying an automated decision system to adjudicate small claims civil disputes.

Automated decision systems also make judgment calls that do not take the form of formal decisions. Autonomous vehicle systems that decide when to accelerate, decelerate, stop, and turn are one example. So are algorithms that decide which advertisements (or movies or news stories or social media posts) to display to an individual user, based on the user’s demographic data and past behaviour.

More concerning to many is the use of automated decisions for AI-powered surveillance, profiling, and behaviour control and manipulation, which also have pervasive potential in fields such as employment and work, health, social media, and location and movement tracking, among others. It is quite likely that, by the time you read this blog post, you will have interacted with at least one automated decision system as part of your day.

Of course, many AI-powered automated decisions offer tremendous advantages to organizations that make them available to users and do not raise the Orwellian concerns often associated with AI. Services that offer consumers products they would want or that are suited to their interests (as opposed to a wash of irrelevant ads or product offerings), such as recommendations for books or movies by Amazon or Netflix, or new financial services products, are examples. Speech recognition software that facilitates dictating emails or text messages by learning individuals’ speaking intonations and patterns is a godsend to busy parents and professionals – and to older smartphone users whose thumb dexterity will never match that of their kids. So are autocorrect and suggestion features in text and word processing software (despite the sometimes hysterically funny glitches that are already memes online). Many people also increasingly rely on virtual assistants like Siri, Alexa, Cortana and Google Assistant, the features and functions of which are constantly expanding. We like it when search engines return relevant and contextual results, although it is somewhat eerie when, after searching a topic on Google, one starts getting YouTube recommendations for videos on related topics. AI has also helped us through the COVID-19 pandemic.

All of which is to say that, while there are concerns about the uses of AI and automated decision making, not all uses of automated tools for making decisions call out for regulation. The diversity of applications raises questions about the appropriateness of a one-size-fits-all approach to regulation.

Regulating automated decision making under privacy laws

Automated decision making is an easy target for regulators. Decisions made by automated systems affect individuals’ lives and livelihoods and reputations. This can sit uncomfortably with notions of fairness and justice. This unease is compounded by stories that depict automated decision systems misfiring, discriminating against individuals, and making biased decisions based on inadequate data sets or poorly trained or monitored algorithms, as well as by perceived threats of mass surveillance, profiling and behaviour manipulation.[4]

GDPR’s regulation of automated decision making under privacy laws

The European Union, through the GDPR, is a leader in its approach to regulating automated decision making under privacy laws.[5]

Two articles of the GDPR apply expressly to automated decision-making. Article 22 of the GDPR contains a specific prohibition that protects individuals against solely automated decision making that has legal or similarly significant effects on them.[6] This Article reads as follows:

1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

2. Paragraph 1 shall not apply if the decision:

(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;

(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or

(c) is based on the data subject’s explicit consent.

3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.

Articles 14(2)(g) and 15(1)(h) of the GDPR provide data subjects with express rights to transparency and explainability of automated decisions:

“The data subject shall have the right to obtain from the controller confirmation [of] … the following information: (h) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

It is notable that the transparency and explainability obligations do not apply to all decisions, predictions, or recommendations made using, or partially relying on, AI systems. Rather, the obligations are limited to the decisions referred to in Article 22, namely decisions based solely on automated processing, including profiling, which produce legal effects concerning an individual or similarly significantly affect him or her.

Such express regulation of automated decision-making under the GDPR must also be read together with its generally applicable provisions in relation to data minimization, data accuracy, notification obligations regarding rectification or erasure of personal data, and restrictions on processing.[7] The GDPR also requires, in certain circumstances, data protection impact assessments of large-scale profiling.[8]

What the rest of the world is doing for regulation of automated decision making under privacy laws

Despite all of the concerns about automated decision making and privacy, most countries have not yet enacted specific obligations under their privacy laws to deal with such technologies or processes, although recent trends suggest that approaches to the regulation of automated decision-making are evolving rapidly.

The U.K. House of Lords Select Committee on Artificial Intelligence concluded in 2018 that AI technology should not be deployed in automated decision systems unless and until it is capable of explaining its own decision making, even if this meant delaying the deployment of cutting-edge AI systems. It acknowledged that both E.U. and U.K. legislation endorsed explainability as the standard, and called on various expert bodies to “produce guidance on the requirement for AI systems to be intelligible”.[9] The UK subsequently adopted the GDPR, whose provisions reflect this perspective. The EU also recently released a draft regulation for Harmonizing Rules on Artificial Intelligence. (For a summary of this regulation, see, EU’s Proposed Artificial Intelligence Regulation: The GDPR of AI.)

Apart from the GDPR, there are relatively few international benchmarks for the specific legal regulation of automated decision-making under privacy laws.[10] The jurisdictions that have specifically targeted such automated decision making, other than the jurisdictions in which the GDPR applies, are Brazil and California.

In Brazil, the General Data Protection Law (Lei Geral de Proteção de Dados, or the “LGPD”), which is modelled on the GDPR, allows a data subject to request a review of any decision made solely by an automated decision system. Like the GDPR and the proposed CPPA, the LGPD entitles a data subject to an explanation of the criteria and procedures used in the automated decision (Article 20).

The California Consumer Privacy Act of 2018 does not address automated decision making. However, on November 3, 2020, Californians approved Proposition 24, a ballot measure that enacts the California Privacy Rights Act of 2020. Once it comes into force in 2023, the California law will require the state’s Attorney General to promulgate regulations that govern individuals’ opting out of automated decision making, and that require businesses to provide “meaningful information about the logic involved in [automated] decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer”.

By contrast, recent privacy law reforms in Australia[11] and New Zealand[12] declined to address automated decision making specifically. Legislation that would have regulated automated decision making by large corporations – the Algorithmic Accountability Act – was introduced in both the U.S. House of Representatives and the U.S. Senate in 2019, but did not become law. [13]

Hong Kong’s Privacy Commissioner for Personal Data, while endorsing the territory’s “principle-based and technology neutral” data protection law, suggested that added protection should come from holding businesses and other organizations that use data to a higher ethical, but non-regulatory, standard “alongside the laws and regulations”.[14]

Similarly, Singapore’s Personal Data Protection Commission has endorsed a norms-based approach that encourages the adoption of ethical principles by private sector entities. One of these principles is explainability. Though Singapore law does not impose these requirements, the Commissioner writes that, “[w]here the effect of a fully-autonomous decision on a consumer may be material, it would be reasonable to provide an opportunity for the decision to be reviewed by a human”.[15]

Responsible AI practices can also be advanced in ways beyond regulation. There has been a plethora of studies, working papers, and guidelines on the responsible use of AI. For example, ITechLaw’s Responsible AI: A Global Policy Framework (and its 2021 update)[16] presents a policy framework for the responsible deployment of artificial intelligence that is based on “best practice” principles. In addition, many other organizations, including the IEEE and NIST, have developed, or are in the process of developing, standards for the use of AI in automated decisions. The Law Commission of Ontario also recently published a report, Legal Issues and Government AI Development: Workshop Report, which summarizes eight major themes and insights into the use of AI systems for decision making.

The level of engagement on these issues internationally highlights the fluidity of the analysis and suggests caution in regulatory approaches the impacts of which are hard to predict.

The Proposed regulation of automated decision-making in Canada: an “appropriate” response?

PIPEDA does not have any express provisions that deal with automated decision making. However, PIPEDA is principle based and intended to be technologically neutral. Its general provisions have been applied to the collection, use and disclosure of personal information by automated means such as most recently in the Cadillac Fairview and Clearview AI decisions of the OPC, and there is no reason to think it would not also apply to the uses of personal information for automated decision making.

As such, as under the GDPR, PIPEDA’s fair information practice principles would likely apply to the collection, use, and disclosure of personal information by automated means, including the principles pertaining to consent, identifying purposes, data accuracy, limiting collection, use, disclosure and retention, openness (transparency), and individual access. Further, s. 5(3) of PIPEDA has an overriding “appropriate purposes” limitation whereby “[a]n organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances”. Accordingly, as the Clearview AI case demonstrates, PIPEDA is already capable of addressing some of the challenges associated with the use of automated technologies that rely on personal information.

In 1995, the European Union adopted the Data Protection Directive (Directive 95/46/EC), the GDPR’s predecessor. Article 25 of this Directive prohibits member states (and companies within their borders) from transferring personal data to a third state whose laws do not adequately protect the data. Transfers to non-member states may occur if the European Union determines that the privacy protection regime of such jurisdictions is “adequate” (or if other specified protective measures are put in place by the transferring entity). For the purposes of Article 25, Canada's PIPEDA received a favourable “adequacy” determination in 2001.

Under the GDPR, the European Union must reassess the adequacy of PIPEDA's protections, an exercise that must be repeated at least every four years. As part of this assessment, the Europeans will evaluate Canadian privacy laws in light of the new, higher standards of protection set out in the GDPR.

It is not surprising, therefore, that both the Quebec and federal governments have looked to GDPR standards, including in relation to automated decision-making, when proposing their respective overhauls of the existing privacy regulatory framework.

Quebec’s Bill 64 proposal to regulate automated decision making under privacy laws

Québec’s proposed Bill 64 would require public bodies and enterprises to provide certain information to the person concerned when they collect personal information using technology that includes functions allowing the person to be identified, located or profiled, or when they use personal information to render a decision based exclusively on an automated processing of such information. It establishes a person’s right to access computerized personal information concerning him or her in a structured, commonly used technological format or to require such information to be released to a third person.[17]

New Section 65.2 addresses the obligations of public bodies:

65.2. A public body that uses personal information to render a decision based exclusively on an automated processing of such information must, at the time of or before the decision, inform the person concerned accordingly.

It must also inform the person concerned, at the latter’s request,

(1) of the personal information used to render the decision;

(2) of the reasons and the principal factors and parameters that led to the decision; and

(3) of the right of the person concerned to have the personal information used to render the decision corrected.

New Section 12.1 addresses the obligations of private sector enterprises:

12.1. Any person carrying on an enterprise who uses personal information to render a decision based exclusively on an automated processing of such information must, at the time of or before the decision, inform the person concerned accordingly.

He must also inform the person concerned, at the latter’s request,

(1) of the personal information used to render the decision;

(2) of the reasons and the principal factors and parameters that led to the decision; and

(3) of the right of the person concerned to have the personal information used to render the decision corrected.

The person concerned must also be given the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision.

Under Bill 64, a monetary administrative penalty may be imposed on anyone who does not inform the person concerned by a decision based exclusively on an automated process or does not give the person an opportunity to submit observations, in contravention of section 12.1.

The most striking difference between the approaches to regulating automated decision-making under the GDPR and under Quebec’s Bill 64 is that, whereas Quebec appears to have imported the concepts of transparency and explainability from the GDPR (and other well-known policy frameworks), the legislator has chosen not to import the GDPR’s prohibition against the use of automated decision-making (although the Bill would impose fines on enterprises that do not inform individuals that they have been subject to a decision based exclusively on an automated process or of their right to submit observations on the decision to a member of the personnel of the enterprise who is in a position to review the decision). In this regard, the lighter regulatory burden is a welcome departure from the GDPR, since the policy objective of insisting on maintaining a “human in the loop” for all automated decision-making is far from clear, especially where the subjects of such decisions are provided with general details as to how the decisions are rendered and retain their contractual or statutory rights to contest outcomes.

CPPA

The CPPA’s provisions are also intended to be technologically neutral. Accordingly, one would expect that these provisions would, like PIPEDA, generally apply to automated decision making (as well as profiling).

The CPPA would add several new provisions to promote transparency with respect to automated decision making and to introduce explainability obligations with respect to such decisions.

Speaking to the House of Commons on November 24, 2020 – in moving that the proposed CPPA be read a second time and referred to committee – Minister Bains described the purpose of the bill’s automated decision making provisions as follows:

In the area of consumer control, Bill C-11 would improve transparency around the use of automated decision-making systems, such as algorithms and AI technologies, which are becoming more pervasive in the digital economy.

Under Bill C-11, organizations must be transparent that they are using automated systems to make significant decisions or predictions about someone. It would also give individuals the right to an explanation of a prediction or decision made by these systems: How is the data collected and how is the data used?

As indicated above, the CPPA would define an “automated decision system” as

any technology that assists or replaces the judgement of human decision-makers using techniques such as rules-based systems, regression analysis, predictive analytics, machine learning, deep learning and neural nets. (système décisionnel automatisé)

Section 62(1) (Openness and transparency) would require organizations to provide a general account of their practices with respect to making automated decisions:

62 (1) An organization must make readily available, in plain language, information that explains the organization’s policies and practices put in place to fulfil its obligations under this Act.

Additional information (2) In fulfilling its obligation under subsection (1), an organization must make the following information available…

(c) a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have significant impacts on them;

Section 63 (Information and access) would also create a new explainability obligation for automated decisions.

63 (1) On request by an individual, an organization must inform them of whether it has any personal information about them, how it uses the information and whether it has disclosed the information. It must also give the individual access to the information…

Automated decision system

(3) If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual, the organization must, on request by the individual, provide them with an explanation of the prediction, recommendation or decision and of how the personal information that was used to make the prediction, recommendation or decision was obtained.

Comments on Bill 64 and the CPPA’s proposed approach to regulating automated decision making using privacy laws

For starters, the CPPA definition of “automated decision system” is exceptionally broad. It is not limited by classes of technology, the products, services, or sectors of the economy that will use it, the nature of the automated decisions that the systems could implicate, the classes of individuals that could be affected by decisions, or the nature or significance of the impacts of the decisions on individuals.

The legislation also applies to technology that “assists or replaces” human judgment. This captures two broad categories of decision making processes.

First, there are processes in which decisions are ultimately made by a “human in the loop”, rather than by an AI system without human intervention. Think of a system that screens immigration applications by applying prescribed criteria to make a recommendation to a human official, who must then decide whether to accept or reject the application. The “automated decision system”, as defined in the CPPA, will have assisted human judgment, not replaced it.
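
To make the “assists” category concrete, the following is a minimal, purely hypothetical sketch (it is not drawn from IRCC’s actual system; the criteria, thresholds, and field names are invented for illustration) of a rules-based triage tool that only recommends a stream, leaving the decision to a human officer:

```python
# Hypothetical rules-based triage tool: it only recommends a stream;
# a human officer makes the actual decision. All criteria are invented.

def triage_application(application: dict) -> str:
    """Return a recommended stream ('simple' or 'complex') for human review."""
    flags = []
    if application.get("documents_missing", False):
        flags.append("missing documents")
    if application.get("prior_refusals", 0) > 0:
        flags.append("prior refusals")
    if application.get("country_risk_score", 0) > 7:
        flags.append("elevated risk score")
    return "complex" if flags else "simple"

# The recommendation assists, but does not replace, the officer's judgment.
recommendation = triage_application(
    {"documents_missing": False, "prior_refusals": 1, "country_risk_score": 3}
)
print(f"Recommended stream for officer review: {recommendation}")
```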

Second, there are processes without a “human in the loop”, in which the AI system makes decisions without human intervention. Here, technology will have replaced human judgment as opposed to merely having assisted it. Think of a computer program that marks multiple-choice exams, determines the distribution of scores, and applies a curve to assign grades. Or, consider software that a bank might use to decide loan applications by assessing an applicant’s creditworthiness based on their personal information.[18]
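
As a sketch of the “replaces” category, the following hypothetical grader (with invented exam data and a deliberately simplistic curve) marks multiple-choice answers, computes the score distribution, and assigns grades without any human intervention:

```python
# Hypothetical fully automated grader: marks answers, computes the score
# distribution, and applies a simple curve, with no human in the loop.
from statistics import mean, pstdev

ANSWER_KEY = ["a", "c", "b", "d", "a"]  # invented exam key

def raw_score(answers):
    return sum(1 for given, correct in zip(answers, ANSWER_KEY) if given == correct)

def curved_grades(all_answers):
    scores = {student: raw_score(ans) for student, ans in all_answers.items()}
    mu = mean(scores.values())
    sigma = pstdev(scores.values()) or 1.0  # avoid dividing by zero
    grades = {}
    for student, score in scores.items():
        z = (score - mu) / sigma  # position relative to the class distribution
        grades[student] = "A" if z >= 1 else "B" if z >= 0 else "C" if z >= -1 else "D"
    return grades

submissions = {
    "student1": ["a", "c", "b", "d", "a"],
    "student2": ["a", "b", "b", "d", "c"],
    "student3": ["b", "b", "a", "d", "a"],
}
print(curved_grades(submissions))
```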

In addition to applying to a wider range of decisions because it includes AI systems that “assist” in making decisions, the CPPA is also much broader because it applies to systems that make “predictions, recommendations or decisions about individuals”. The transparency and explainability obligations are also much broader than those in the EU because they are not limited to decisions that produce “legal effects” concerning, or “similarly significantly affect”, individuals. As noted above, the transparency obligation would apply where “predictions, recommendations or decisions about individuals could have significant impacts on them”, while the explainability obligation has no such limitation.

There are pros and cons of the proposed approach to regulating automated decision making under privacy laws.

Transparency and “explainability” invite regulation because of how automated decision systems can use personal information to make or inform determinations that can have legal or other significant consequences for individuals. The logic of the proposed CPPA is that, just as Canadians should have the right to know which personal information an organization has on file about them and how the organization uses that information, they should also be entitled to know whether the organization uses an automated system to make decisions, why any such decision was made, and how their personal information was used in making it. The proposed new requirements track those that the federal government has imposed on itself.

Requiring organizations to be transparent about their use of automated decision systems and to provide explanations of how those systems have made particular decisions can further the objective of building trust in automated decision systems.[19] Transparency and explainability requirements may prevent or expose erroneous or abusive automated decisions, such as decisions that unlawfully discriminate. Transparency and explainability rules may also ensure that Canadians who are affected by automated decisions have the information they may need to challenge them.

The proposed CPPA’s requirement, in section 66(1), that an explanation “must be provided … in plain language” is ostensibly similarly motivated, though it belies both the potential difficulty of explaining predictions (and thus decisions) made by “black box” AI systems,[20] and the potential limitations of a non-expert’s ability to understand an explanation in a way that builds trust rather than promotes confusion or even misplaced confidence.[21] Moreover, requiring transparency could increase the risk that the security of automated decision systems – or, more accurately, of the data they use to make or inform decisions – will be compromised, or a company’s trade secrets or intellectual property will be publicly exposed.[22]

The scope of decisions to which the provisions apply is also extremely broad, especially given how pervasive and ubiquitous AI applications will become. The transparency obligation is somewhat narrowed by the qualification that it applies only to “predictions, recommendations or decisions about individuals that could have significant impacts on them”. However, given that AI systems can make decisions across ecosystems of products and services and that the types of “impacts” are not limited to specific categories such as those that have legal significance, there will likely be many more organizations affected by the obligations than intended. As noted above, the explainability obligation has no limitation as to whether the decision is made solely by an automated decision making process or whether the decision has any significant impact on an individual. In fact, subsection 63(3) is not even expressly limited to decisions that involve the use of personal information.

As many organizations have for decades used relatively less sophisticated prediction and decision making tools that “assist” in making decisions, both Bill 64 and the CPPA could now sweep in decision making processes that heretofore never had to be disclosed or explained. As well, the transparency and explainability obligations are not technologically neutral, as similar obligations are not imposed under Bill 64 and the CPPA for decisions that are made using traditional manual processes.

But the choice to use privacy law, and to legislate specifically in relation to automated decision systems, raises other important policy considerations, even though the proposed Bill 64 and CPPA have a more limited scope than the GDPR.

Automated decision systems use personal information to draw inferences, make predictions, and either offer recommendations or take independent decisions. Individuals have privacy interests in whether and how their personal information is used to make decisions about them.[23] This is what justifies using privacy law to regulate automated decision making; it is the field of legislation that is concerned with protecting individuals from the misuse of their personal information.

The automated decision system provisions of Bill 64 and the CPPA, like the comparable provisions in the GDPR, are not, however, truly directed at privacy-related mischief. They regulate the use of particular technologies more than the use of information, though, due to the nature of the technologies in question, the line is admittedly difficult to draw. Further, the goal is not the protection of reasonable expectations of privacy, which is what privacy laws advance, but the avoidance of other harms, such as biased or inaccurate decisions.

If the purpose of regulating automated decision making is to avoid potential harms, then regulation should ostensibly be concerned not only with means, but also with ends. These ends – i.e., the actual decisions that automated systems make – are already regulated under numerous existing frameworks.[24] To the extent that Bill 64 and the CPPA add value to the regulation of automated decision making by focusing on means of automated decisions, the extent of that contribution can only be measured in light of how the law already governs the ends of automated decisions. For example:

  • Competition law already governs interactions between firms with respect to consumer-facing decisions, particularly about pricing. The involvement of automated decision systems does not change these rules or their application, as the Competition Bureau has confirmed.[25]
  • Consumer protection law already governs firms’ behaviour vis-à-vis customers. For example, if a business employs an automated decision system in its e-commerce activities, and the automated decision system makes a false, misleading, or deceptive representation to a consumer, then the consumer may seek to avail themselves of the protections of federal (under the Competition Act) or provincial consumer protection law.[26] Similarly, to the extent that an organization uses an automated decision system for credit rating or other consumer reporting, its activities will presumably be governed by the same statutory frameworks that apply in the absence of an automated decision system.[27]
  • Human rights law already prohibits unlawful discrimination. This includes human rights legislation that governs transactions between private parties,[28] as well as the Canadian Charter of Rights and Freedoms, which governs transactions between the state and private parties. If automated decision systems cause organizations (including government agencies) to run afoul of these anti-discrimination measures – by manifesting “algorithmic bias”, for example[29] – then existing law will be available to respond and impose available sanctions.[30]

The regulation of automated decision making under Bill 64 and the CPPA would overlap with these and numerous other existing legal and regulatory frameworks. The result would be the regulation of the means of automated decisions under privacy law, and the regulation of their ends, and perhaps also the means, under other regulatory frameworks. This creates the possibility not only of a duplicative compliance burden, but also of duplicative enforcement by different regulators at different levels of government, both federal and provincial. Moreover, the regulation of automated decision making under privacy law will likely result in privacy commissioners such as the OPC, with limited or no expertise in the other regulatory areas, becoming involved in areas that are better handled by the existing regulatory regimes. Expanding the federal privacy laws into a myriad of areas under provincial jurisdiction also raises new constitutional division of powers issues.

If the true objective of the new transparency and explainability rules is more broadly to promote the responsible deployment of automated-decision systems, it may be more prudent to consider either stand-alone legislation or “best practice” standards, rather than to artificially extend privacy law beyond its natural confines. The draft European Regulation for Harmonizing Rules on Artificial Intelligence, referenced above, provides an example of a “fit for purpose” stand-alone legislative framework (one that will no doubt generate a lot of interest and debate over the coming months). The ITechLaw Responsible AI: A Global Policy Framework (and 2021 update) provides an example of the latter “best practice” industry standard approach.

As governments, organizations, and other members of the public grapple with privacy issues associated with the responsible deployment of automated decision systems, the various regulatory options and choices are starting to come into focus. However, moving forward with Bill 64 and the CPPA’s obligations related to automated decision making requires a robust and informed discussion about the implications of using privacy law to regulate ethical uses of AI.

This blog post was first published by Barry Sookman on his blog @barrysookman.com.