Sniffing something fishy in the sea of consumer reviews, the National Advertising Division (NAD) snapped its jaws at advertising claims made in television commercials, infomercials, and on the Web by Euro-Pro Operating for its Shark brand vacuum cleaners. The advertising was brought to the NAD’s attention by competing vacuum cleaner manufacturer Dyson, Inc. The claim at issue was:
“America’s Most Recommended Vacuum Brand.*
*Based on percentage of consumer recommendations for upright vacuums on major national retailer websites through August 2013, U.S. Only.”
What was special about this case was that Euro-Pro sought to substantiate its “most recommended” claim on aggregated consumer reviews.
The first issue was, what did the claim really mean? Dyson said it meant that the Shark is the most recommended vacuum among vacuum cleaner owners, nationwide, and that the claim communicated a comparative message, namely that the Shark was recommended over other brands. Euro-Pro, on the other hand, thought the claim was as clear as Caribbean water: the Shark is “America’s Most Recommended Vacuum Brand” “based on percentages of consumer reviews for upright vacuums on major national retailer websites through August 2013.” There was nothing comparative about the statement, according to the advertiser. Interestingly, the NAD tended to side with the advertiser’s interpretation, namely that the claim “America’s Most Recommended Vacuum Brand*” reasonably conveyed a message that Shark is the most recommended vacuum brand among American vacuum cleaner consumers. However, it interpreted the asterisked second part of the claim to be an explanation of how Euro-Pro sourced the data on which it based its claim. So, the Shark wins, right? Not so fast.
The second (and ultimately determinative) issue was whether the data underlying the claim were sufficiently reliable to provide a reasonable basis for it. Here is where the NAD really sank its teeth into aggregated reviews as a source for advertising claims. The NAD started from “first principles.” Were the consumer reviews “representative”? That is, was the data representative of “America”? The reviews were drawn from responses to a single question: whether or not the consumer recommended the product. There were thousands of these reviews, tabulated on a quarterly basis across many online retailers and some brick-and-mortar stores. What these data offered in volume, they lacked in scientific integrity, according to the NAD. The record apparently showed that an overwhelming majority of vacuum cleaner sales occurred in brick-and-mortar stores, and yet the aggregated universe of reviews was heavily skewed toward online retailers. Should it matter where a person purchased the product? The NAD thought so.
What did the advertiser know about the demographics? A lot, but apparently the fact that the information was “self-reported,” as consumer reviews naturally are, was a virtually fatal flaw. The NAD found that this demographic information was “unverified.” The NAD also faulted the consumer reviews for lacking verifiable insight into household income. The NAD, seemingly unclear whether consumer reviews could ever be representative in a verified and meaningful way, concluded that the advertiser had not shown in this case that the aggregated data were reliably representative.
In addition to problems with representativeness, the consumer reviews were just generally unreliable. In a fascinating statement, the NAD suggested that consumer reviews can be relied upon by consumers in making purchasing decisions, but that this does not make them “reliable” for purposes of advertising substantiation. And this is the key takeaway from the case: the NAD wisely does not take issue with consumer reviews generally. (Indeed, the aggregator of the consumer reviews was never even mentioned in the case.) The problem is that when consumer reviews are aggregated across many websites, under various conditions and in various contexts, there is (currently) no way to ensure that the reviews are reliably tabulated and integrated for purposes of forming a reasonable basis for an advertising claim. The NAD did not say it would be problematic to call out that, on Amazon.com, the Shark has a 5-star average from consumer reviews. What the decision stands for is the proposition that aggregation of data across platforms and sites must be shown to be reliably and scientifically performed in order to meet a basic threshold for adequate substantiation. It was the aggregation, and the subsequent interpretation of the aggregation (“America’s Most Recommended…”), that really bothered the NAD.
(Query: what would the NAD think about awards and seals that are based on aggregated data?)
The story doesn’t end here, though. Euro-Pro has decided to take its case to the National Advertising Review Board (NARB). However the panel decides, this case is an important one because it is the first time anyone has really examined the increasing use of aggregated data for purposes of marketing claims.
Euro-Pro Operating, LLC, Shark-brand Vacuum Cleaners, NAD Case Reports #5717 (May 29, 2014).