Over the past year, the FTC has signaled its intention to bring enforcement actions against businesses that participate in the dissemination of disinformation on social media platforms, especially disinformation spread by malicious social media bots (“social bots”). While some doubt that the FTC Act applies to the use of social bots, the FTC has made clear that providing customers with “deceptive and inaccurate information online” “pollutes the e-commerce marketplace and prevents customers from making informed purchasing decisions.” Accordingly, businesses that engage in e-commerce should review their promotion and advertising practices to ensure that they do not run afoul of the FTC.
The first indication of the FTC’s focus on social bots was its October 2019 settlement with Devumi, Inc. The FTC had alleged that Devumi sold its customers “followers” that were, in fact, bots designed to appear as real people with real social media accounts. Devumi’s activity fell within the FTC’s enforcement authority because an account’s follower count is one of the social media metrics that people use in “making, hiring, investing, purchasing, licensing, and viewing decisions.” It is telling that the FTC did not bring the action against Devumi’s customers, whose metrics were presumably artificially increased. Instead, the FTC pursued Devumi, the entity that provided the “means and instrumentalities” for its customers to commit deceptive acts or practices, i.e., “to exaggerate and misrepresent their social media influence,” thereby enabling them to deceive consumers.
Congress took note of the FTC’s settlement with Devumi. In December 2019, the Senate Appropriations Committee noted its concern over the “sophistication and rapidly expanding scope of social media bots” and directed the FTC to submit a report to help it understand the impact on consumers of the use of these bots in advertising. The Appropriations Committee also ordered the FTC to include in the report a discussion of how social bots’ “use might constitute a deceptive practice” under the FTC Act.
In July 2020, the FTC submitted its Report, which asserted that the use of bots in social media fell “squarely within” its enforcement authority, as “more than 90%” of such bots “are used for commercial purposes.” At the same time, the Report pointed out that the FTC Act also “constrain[s]” the FTC’s authority to counteract the spread of social bots because it requires the FTC to show, “in any given case[,] that the use of such bots constitute a deceptive or unfair practice in or affecting commerce.” While the Devumi situation fit within the traditional analysis required by the Act, the FTC reminded Congress that “each fact pattern must be analyzed on a case-by-case basis.”
Alongside the FTC’s Report, Commissioner Rohit Chopra issued a strong statement asserting that social bots are a significant concern for society at large due to their role in disseminating false information on social media platforms. Commissioner Chopra argued that social media platforms should not be permitted to police themselves because their “core incentives do not align” with the goal of eliminating social bots. This is because bots spread inflammatory and false content that increases users’ engagement with the platforms – whether positively or negatively – and inflates the price of digital advertising. Thus, perversely, social media platforms could stand to benefit financially from bots that spread disinformation. Commissioner Chopra believes, however, that current law can be used to combat the bots, and he provides several suggestions on how the FTC can do so:
By holding social media platforms accountable for providing potential advertisers with unsubstantiated and fraudulent metrics regarding user engagement, a practice that “likely violates [the] FTC Act’s prohibition on deceptive acts or practices.”
By writing rules to increase accountability and transparency regarding “undisclosed influencer connections and deceptively formatted ads.” Commissioner Chopra further noted that “Congress may … need to reassess the special privileges afforded to tech platforms, especially given their vast power to curate and present content in ways that may manipulate users.” He also emphasized that the FTC “must also fundamentally reform its approach to fake reviews,” as many of them are being generated using bots.
By applying to a wider array of commercial activity the FTC’s policy that requires social media influencers to disclose their material connections to the products they are promoting. Such commercial activity would include a “for-profit enterprise [that] offers surreptitious manipulation services to denigrate a commercial competitor or political opponent.”
At Commissioner Chopra’s urging, the FTC has pushed for aggressive remedies against offending companies and is diminishing its reliance on “no-money, no-fault settlements.” As such, it can be expected that the FTC’s increasing attention to social bots will result in vigorous enforcement actions. Thus, companies engaged in e-commerce should take the time to consider whether their business models make them vulnerable to the FTC’s attention.