As previously reported on DWT’s AI Law Advisor blog, beginning on July 1, 2019, a new California law makes it unlawful for any person to use a “bot” to communicate with a person in California “online” with the intent to mislead the person about the bot’s “artificial identity” in order to incentivize the purchase or sale of goods or services, or to influence an election.
However, persons can qualify for a safe harbor from liability under California’s so-called “Bot Disclosure Law” by simply disclosing that the communications are generated by a bot, rather than a person. Thus, companies using bots online to engage in commerce or political activities that may reach users in California should consider promptly adopting disclosures sufficient to ensure no liability arises under this new law.
The new Bot Disclosure Law, codified in California’s Business & Professions Code §17940, et seq., prohibits automated online accounts (commonly known as “bots”) from communicating with a person to incentivize that person in California to purchase goods or services, or to influence a vote in an election, unless the “person using a bot … discloses that it is a bot.” Specifically, as of July 1, it is unlawful for:
… any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.
Under the new Bot Disclosure Law, violators may be subject to a $2,500 penalty per violation under the California Unfair Competition Law. A website or other application engaging multiple unique visitors on a daily basis, and deemed to be violating the statute, could therefore quickly be subject to penalties totaling hundreds of thousands of dollars, forcing companies to consider disclosing bot usage in all communications or interactions to avoid liability. Further, although the statute does not provide a private right of action, some have speculated that class action plaintiffs may try to argue that the use of bots to incentivize people to engage in commercial or political activities, without properly disclosing the identity of the bot, could violate the “fraudulent” prong of California’s Unfair Competition Law.
Fortunately, there exists a “safe harbor” from liability available to any person disclosing that the communication is made by a bot (rather than a person). In the statute’s own words: “A person using a bot shall not be liable under this section if the person discloses that it is a bot.” Although the statutory language is somewhat contradictory, the legislature’s intent is clear: increase transparency for bot-based communications with human beings. Thus, entities can operate under the safe harbor without risk of liability simply by making such a disclosure when using bots in commercial or political activities that reach individuals in California. The statute requires that such disclosures be “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.” Practically speaking, since it would be difficult to discern which persons receiving the communications are in California, it may be necessary to apply these disclosures to all communications the bot makes.
The reach of the measure is further narrowed by the definition of a “bot,” which the statute defines as an “automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” However, the statute makes clear that the new measure does not impose a duty on service providers of “online platforms,” which are defined as public-facing Internet Web sites, Web applications, or digital applications (including social networks) that have 10,000,000 or more unique monthly U.S. visitors or users. Thus, the new law exempts large ISPs, web hosting companies, and social networks from its scope.
Understanding the Reach of the New Bot Disclosure Law – Is Your Bot Covered?
This new law reaches very specific types of bots: automated bots that communicate with persons in California to influence commerce or an election. Consider the following questions to determine whether your bot falls within the scope of this new measure:
- Is your bot an “automated online account where all or substantially all of the actions or posts of that account are not the result of a person”?
In other words, if the bot is powered by machine learning or other forms of artificial intelligence technology that initiates or responds to communications with persons in California without substantial oversight or human involvement (i.e., without humans in the loop), it is likely covered by the new law. Bots powered primarily by persons are not covered by this measure.
- Is the communication or interaction with a person in California “online”?
Remember, this measure only reaches bots communicating with persons “online.” The statute defines the term “online” as “appearing on any public-facing Internet web site, web application, or digital application, including a social network or publication.” This broad definition arguably captures any communications generated by a bot that exists on a publicly available website, user platform, or application. Left unanswered is the question of whether this measure covers interactive voice recognition systems provided via telecommunications services, or whether communications delivered to consumer devices in the home could be construed as covered by the new law.
- Finally, does your bot communicate or interact with a person in California in order to incentivize a purchase or sale of goods or services in a commercial transaction, or to influence a vote in an election?
The third element of the new law limits its scope to only those communications focused on commerce or influencing a vote. Thus, automated bots used for non-commercial or non-political activities, such as reporting on news stories or distributing information about matters of public interest, would not be covered.
Note that if you answered yes to all of these questions, your bot is covered by the new law. In order to avoid liability, covered entities should immediately consider adopting appropriate disclosures or labels for their bots, discussed below.
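The three-question test above can be sketched as a simple checklist function. This is an illustration only, not legal advice; the field and function names are assumptions introduced for this example, not terms from the statute.

```python
# Hypothetical sketch of the three-part coverage test described above.
# Illustration only, not legal advice; field names are assumptions.
from dataclasses import dataclass

@dataclass
class BotProfile:
    fully_automated: bool          # all or substantially all actions are not the result of a person
    communicates_online: bool      # public-facing website, web application, or digital application
    commercial_or_political: bool  # incentivizes a purchase/sale or influences a vote

def is_covered(bot: BotProfile) -> bool:
    """The bot falls within the law's scope only if all three elements are met."""
    return (bot.fully_automated
            and bot.communicates_online
            and bot.commercial_or_political)
```

A purely informational news bot, for example, would fail the third element and fall outside the law’s scope under this sketch.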
Complying with the New Bot Disclosure Law - Crafting Sufficient Disclosures
If you determine your bot and its activities are subject to the new law, you can qualify under the safe harbor exemption simply by disclosing that you are using a bot to communicate.
The safe harbor disclosure must be presented in a manner that is “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts … that it is a bot.” While “clear, conspicuous, and reasonably designed” is not defined in the Bot Disclosure Law, the legislative history recommends following Federal Trade Commission (“FTC”) guidelines from 2013 on drafting “clear and conspicuous disclosures.” In those guidelines, the FTC explains that clear and conspicuous disclosures share many of the following features: prominence, proximity to the relevant information, freedom from distractions, repetition, and understandable language. In addition, the FTC discourages the use of “pop-ups” that could be blocked by a user’s software. One obvious solution, consistent with the general intent of the new law, is simply to identify your use of a bot with the word “bot” clearly and prominently displayed in its name, as some companies have already begun to do. That approach may be less appealing, however, for companies whose customers may be put off by such a message.
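As a purely illustrative sketch of the labeling approach described above, a chat bot might lead every conversation with a persistent disclosure, since it is difficult to know which users are in California. The disclosure wording and function names here are assumptions for illustration, not vetted compliance language.

```python
# Illustrative sketch only: the bot opens every conversation with a
# prominent disclosure, since users in California are hard to identify.
# The disclosure wording is an assumption, not vetted compliance language.
DISCLOSURE = "This is an automated chat bot, not a human."

def start_conversation(first_message: str) -> list[str]:
    """Return the bot's opening messages, leading with the disclosure."""
    return [DISCLOSURE, first_message]
```

Displaying the label in the bot’s visible name as well (e.g., “AcmeBot (bot)” in a chat header, a hypothetical example) would help keep the disclosure conspicuous throughout the interaction rather than only at the start.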