Bot or human? The answer to that question recently took on legal consequence in California when Governor Jerry Brown signed Senate Bill (SB) 1001 on September 28, 2018. SB 1001, which will go into effect on July 1, 2019, prohibits the use of automated online accounts (commonly known as “bots”) to communicate with a person in California with the intent to incentivize a purchase of goods or services or to influence a vote in an election, unless the “person using a bot … discloses that it is a bot.”
This new bot law, sponsored by State Senator Robert Hertzberg, is the first of its kind in the United States and comes at a time of divisive political dialogue regarding election meddling and the large-scale deployment of bots on social media platforms. With the surging use of bots for commercial and political purposes, businesses deploying, or thinking of deploying, bots should carefully consider this new measure when developing such automated tools.
“Clear, Conspicuous” Disclosures Are the Focus of the Bot Law
This new measure focuses on the intent of person(s) using a bot to communicate with another person in California “online.” Specifically, persons intending to mislead the recipient of the communication about the identity of the communicating bot for commercial purposes or to influence a vote in an election would be violating this new statute. However, no liability arises if the person initiating the communications discloses that (s)he is doing so through a bot. The key language of the statute states:
“It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if that person discloses that it is a bot.”
Notably, a person using a bot is not liable if the person discloses that it is a bot in a manner that is “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts … that it is a bot.” Significantly, this measure does not impose any new duties on service providers of “online platforms,” including, but not limited to, web hosting and Internet service providers.
As drafted, the reach of the measure is narrowed by the statutory definition of a “bot,” which is an “automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” An “online platform” is defined as “any public-facing Internet Web site, Web application, or digital application, including a social network or publication, that has 10,000,000 or more unique monthly United States visitors or users for a majority of months during the preceding 12 months.” Finally, a “person” is defined broadly to include not just a “natural person” but also any “corporation, limited liability company, partnership, joint venture, association, estate, trust, government, governmental subdivision or agency, or other legal entity or any combination thereof.”
Note that there is a scienter requirement in the “intent” and “knowingly deceiving” language, which makes violations more difficult for a prosecutor or plaintiff to prove; however, that will likely not stop governments and individuals from trying to enforce this law. SB 1001 provides no private right of action and makes no mention of penalties or enforcement. Enforcement will likely fall to the California Attorney General’s Office (and perhaps district and city attorneys as well) under California’s expansive Unfair Competition Law (UCL), which provides for penalties of up to $2,500 per violation as well as equitable remedies. Private plaintiffs may also try to use the UCL to seek injunctive relief and restitution for violations of the bot law.
Broader Context of the Bot Law
State Senator Hertzberg, the sponsor of this measure, has his own automated bot on Twitter, @Bot_Hertzberg, which is used primarily to raise awareness of this issue and to highlight how a Twitter bot might disclose its automated nature. SB 1001 comes in the context of a broader debate over transparency and the potential First Amendment rights of persons who may use bots for creative and non-commercial purposes. For example, organizations like the Electronic Frontier Foundation have argued that many bots are simply an outlet for their human creators, and that in some cases, disclosing that a bot is a bot could hinder the creator’s ability to express himself or herself. Indeed, the current version of SB 1001 is significantly narrower than a prior version, seemingly to avoid constitutional challenge to a bill that broadly prohibited anonymity among bots rather than trying to distinguish between bots and humans for particular purposes.
Additionally, there are concerns that transparency requirements could extend beyond the typical bot on a social media platform to other systems powered by artificial intelligence. For example, Google has unveiled its Duplex voice system, which engages in conversations that many have described as indistinguishable from communications with a person. Passage of SB 1001 may spur additional proposals to require disclosures wherever a person cannot tell whether they are communicating with an AI system. Indeed, U.S. Senator Dianne Feinstein of California has proposed a bill that would require social media companies to disclose all bots operating on their platforms and would prohibit U.S. political campaigns from using social media bots for political advertising. These issues may lead some to redouble efforts to mandate “algorithmic transparency” and related disclosure obligations for AI systems and processes. Davis Wright Tremaine will continue to monitor and report on these trends as they develop.