As artificial intelligence systems become more prevalent in daily life, efforts to create a unifying set of AI principles have intensified. In the past few months, at least three major works have been published on the issue. The Institute of Electrical and Electronics Engineers authored the first edition of Ethically Aligned Design (EAD1e), a comprehensive three-year project that focuses largely on AI ethical issues. The International Technology Law Association released its in-depth review of ethical guideposts in Responsible AI: A Global Policy Framework in June 2019 (Disclosure: Stuart Meyer is a co-author of Responsible AI). And the Organisation for Economic Co-operation and Development published a set of intergovernmental policy guidelines on AI, all of which were adopted by 42 participating countries.

All three of these publications seek to address vital questions, such as how AI systems should handle human safety concerns. Many of these issues combine philosophy, social norms and engineering design.

Here, we focus on a class of problems that seem either unique to AI systems, or at least present in AI systems to a degree not common elsewhere. Specifically, we address a natural tension between traditional intellectual property rights and a commonly stated ethical goal of ensuring some degree of transparency or explainability in AI systems. Think of a hypothetical AI system for determining whether someone has committed a criminal offense beyond a reasonable doubt, or a system that assigns transplant organs to waiting patients based on a wide variety of factors. It is natural that those impacted (if not society as a whole) will want to know how the systems made those decisions. Yet IP rights — including copyright, trade secrets and patents — may well be implicated by that need.

In this article, we outline the extent to which each of the three recent publications attempts to address this tension. We conclude by offering observations on analogous technological and IP issues that could provide additional guidance on ways that this tension might be handled.

The Institute of Electrical and Electronics Engineers

To ensure that AI systems are created with consideration of their ethical ramifications, EAD1e presents a set of general principles to inform standards and policymaking going forward. Two of these principles, referred to as accountability and transparency, propose that AI systems “provide an unambiguous rationale for all decisions made” and that the basis of an AI decision “should always be discoverable.” Without accountability and transparency, EAD1e warns, the design and development of AI systems will face challenges regarding ethical implementation and oversight.

Despite this warning, EAD1e does not attempt to explicitly challenge the role of traditional IP rights. For instance, EAD1e recognizes that algorithmic transparency does not necessarily supersede “intellectual property that cannot, or will not, be released to the general public.” It nonetheless calls for standards “to be created to avoid harm and negative consequences.” EAD1e gives this issue little emphasis. At one point, it asserts that trust or distrust in an AI system may be fostered in numerous ways, and that “[i]n such circumstances, the challenges presented by the other principles, e.g., the challenge of adhering to the principle of transparency while respecting intellectual property considerations, may become of secondary importance.” As such, EAD1e recommends that creators “describe the procedures and results of [AI] testing in clear language that is understandable to both experts and nonexperts, and should do so without disclosing intellectual property.” However, there is no discussion of how feasible that is. For example, it is the details of a training set corpus and the weightings of various factors that may well determine whether an AI system is fair or biased. How such details might be provided without implicating trade secrets is unclear, though a later discussion suggests that IP concerns might be mitigated by granting access to such detailed intellectual property only to, for instance, a “public interest steward.”

EAD1e asserts that policymakers creating frameworks for realizing AI transparency “should consider the role of appropriate protection for intellectual property, but should not allow those concerns to be used as a shield to prevent duly limited disclosure of information needed to ascertain… acceptable standards of effectiveness, fairness, and safety.” Thus, policymakers should recognize that the level of disclosure will depend “on what is at stake in a given circumstance.”

The International Technology Law Association

Responsible AI argues that AI systems carry a duty of transparency analogous to that found in data protection regimes around the world (e.g., Article 5 of the European General Data Protection Regulation). Beyond transparency, however, the publication also advocates that AI systems “provide information about how exactly a certain output was produced” (also referred to as “explainability”). The importance of transparency and explainability centers on maintaining the public’s trust; the pair also serves as an accountability tool as AI continues to be developed and implemented.

Despite the policies Responsible AI suggests, it also recognizes their limits. For example, the publication concedes that trade secrets present a unique challenge to transparency and explainability since “private businesses have legitimate interests in keeping their valuable know-how secret,” as do public authorities in some respects, such as law enforcement. Responsible AI also warns that people could game or manipulate systems, such as an algorithm that assists in a job application process, if all information became widely available. Indeed, at a certain point, a high degree of transparency might “encourage or exacerbate” manipulation and abuse. And in some cases, where traditional IP protection is not available, strict secrecy is “the only viable option to protect valuable know-how and exclusivity.”

To alleviate these concerns, Responsible AI recommends that regulatory principles not “unduly limit the ability of AI developers and adopters to protect their proprietary algorithms, business secrets and know-how, and to safeguard a competitive advantage they may have compared to their competitors.” That balance would be struck by weighing conflicting interests, a process commonly used in existing data protection laws.

Finally, Responsible AI addresses the challenge of balancing transparency with traditional IP rights by relating the tension to previously seen issues. For example, the evolution of IT auditing has been able to keep pace with advances in technology, while not jeopardizing IP rights. Similarly, it can be anticipated that protocols, like algorithm audits, that are “developed to ensure that AI systems are explainable will evolve to accommodate AI system issues in these areas.” As such, Responsible AI explains that transparency and explainability can be achieved by relying on the assumption that algorithm audits will be able to alleviate the burdens of balancing interests—if such a balance can be achieved in the first place.

Responsible AI also includes an entire chapter on IP rights. This discussion is geared toward “achieving the right balance between the owner or developer of the AI technology and those third parties using the technology, and also as to the provider(s) of data sets.” One relevant question discussed in this section is whether “non-expressive” use of a copyright-protected work (e.g., for purposes of explaining how a system has come to a decision) could be treated as a non-infringing use, compared with creation of copies of the work for commercial exploitation. In a related area, Responsible AI reports that some support a view that text and data mining (TDM), for example as used to train AI systems, perhaps should not be under the control of rights owners based on the idea-expression dichotomy of copyright law. As an example of a TDM exception, Responsible AI cites a new provision of copyright law passed by the Japanese Diet last year, “which basically allows for text data mining by all users and for all purposes, both commercial and non-commercial.” In the area of trade secrets, the IP chapter of Responsible AI asserts that “it may be that seeking to rely on trade secrets for protection of AI inventions or data sets is contrary to requirements or desire for transparency and fairness with regard to AI.” However, no suggestion is given for where to draw the line between IP rights and those transparency/fairness goals. The chapter warns that any urge to create new laws to address these issues should be tempered by a cautious approach, lest they further “complicate an already complex bundle of IP rights.”

The Organisation for Economic Co-operation and Development

With the support of 42 countries, including the United States, the OECD’s AI principles represent a historic step in AI governance. Unlike EAD1e and Responsible AI, the OECD principles aim to encourage international cooperation and achieve global relevance as the world’s first intergovernmental AI policy guidelines. However, the OECD provides no more clarity than either of the other publications regarding the intellectual property tension. And, one might argue, its high-level overview results in even more ambiguity and vagueness.

Among its stated principles for responsible stewardship of trustworthy AI, the OECD recognizes transparency and explainability, as well as accountability, as important objectives. It finds that AI actors should “provide meaningful information, appropriate to the context, and consistent with the state of art.” Specifically, the OECD challenges AI actors to (1) foster a general understanding of AI systems, (2) ensure awareness of interactions with AI systems, including in the workplace, (3) help those affected by an AI system understand its outcomes and (4) assist those who are adversely affected by an outcome to understand the factors and logic that serve as a foundation for the AI system’s decision.

Despite these goals, the OECD addresses the tension with IP only by stating that participating countries recognize “certain existing national and international legal, regulatory and policy frameworks already have relevance to AI, including… intellectual property rights… while noting that the appropriateness of some frameworks may need to be assessed and new approaches developed.” But the OECD provides no clear indication of how to assess the appropriateness of any given framework, leaving the participating nations with little guidance on how to reconcile transparency goals with IP rights.

Where to Look for Solutions

While the three recent publications attempt to shed light on the tension between IP rights and transparency goals, the dilemma is far from solved. We do not purport to provide the answer here either. Instead, we suggest that discussion of appropriate responses can look to various analogues where solutions have previously been found. One way to reconcile the conflict is to draw on existing exceptions within today’s IP regimes that balance transparency against traditional IP rights.

Trade secret law provides at least two such examples. It is now common to require trade secret plaintiffs to identify their trade secrets with some specificity before being granted the ability to take discovery related to alleged misappropriation. See, e.g., the new Massachusetts version of the Uniform Trade Secrets Act at § 42D(b) and California’s longer-standing Civ. Proc. Code § 2019.210, as well as case law and local rules along the same lines. A second example is the federal Defend Trade Secrets Act, which expressly allows for disclosure of trade secrets by whistleblowers in certain circumstances. See 18 U.S.C. § 1833. As such, one possible approach for resolving this IP tension would be to employ similar provisions where appropriate to require limited disclosure, in a manner that maintains trade secrecy as much as possible, where such disclosure is needed to satisfy other AI infrastructure requirements, such as explainability of a critical decision made by an AI system. In some situations, such disclosure might need to be limited in scope but public, while in others it could be broader but made only in camera to a special master, for instance.

Likewise, copyright law has long recognized the need to strike a balance between private ownership and public welfare. In fact, a recent development further emphasizes the limits of copyright infringement. A recent D.C. Circuit case held that safety standards and codes that private standards development organizations had coordinated and published, and which various governments had then incorporated by reference into law, were not necessarily infringed when a nonprofit corporation posted the standards and codes to the internet. The court reversed a summary judgment in favor of the plaintiffs, remanding the case for the district court to correct errors in its fair use analysis (“[t]he task is not to be simplified with bright-line rules, for the statute, like the doctrine it recognizes, calls for a case-by-case analysis”). Thus, the balancing done in this area is similar to the task of addressing the tension we outline here. (The D.C. Circuit reserved the issues of copyrightability and the merger doctrine in the case, directing the district court to address fair use specifically and suggesting that the other issues might come to the fore if necessary later in the case. Note: Fenwick is representing the defendant pro bono.)

Related concerns have also been dealt with in patent law. In the United States, § 287 of the Patent Act limits, at least to some degree, patent infringement remedies for certain medical activities. Additionally, India has historically contemplated limitations on pharmaceutical patents. Before 2005, India denied medicine patents and granted compulsory licenses to allow generic drug manufacturers to create cheaper alternatives, an effort to provide affordable drugs to the poor. Since joining the World Trade Organization, however, India has faced pressure to follow patent regimes similar to that of the United States. The country continues to decide how to align its patent policy with the world market, but India provides an alternative scenario for how patent rights could be balanced. These two examples show the influence of evolving societal interests on patent protections as technology has evolved.

While the task is difficult, the tension within AI systems is not the first occasion on which societies have had to balance IP rights against other critical interests. As AI continues to grow, policymakers (and indeed society overall) will have to face the issue head-on and make some very important decisions. The three recent publications on ethical AI principles are just the beginning of a larger conversation.