Data is everywhere. By some estimates, 90% of the world's data has been created in the last two years. This deluge is only going to intensify, as the Internet of Things (IoT) is expected to add somewhere between 20 and 50 billion connected devices.

If our current phase of the information revolution is marked by Big Data, then the next phase will be defined by the addition of Artificial Intelligence (AI). The only way the massive amounts of data being generated can be put to use is through the power of AI. Data has long been the coin of the realm for modern businesses, but in this new era of AI combined with Big Data, the data a business collects will become virtually priceless. AI is expected to have a tremendous impact on the healthcare industry, and the opportunities offered by the confluence of these three trends—Big Data, AI and IoT—come tempered with regulatory concerns about data collection, use and security.

Brief Overview of Data and AI

The rapid advances in AI over the past few years are largely the result of progress in "neural nets" in computing. Data inputs—for example, images of human faces—are run through several layers of analysis, with multiple operations performed at each layer. (The number of layers and the number of operations at each layer can vary widely.) At each layer, the features extracted from an image are weighed against patterns learned from a known master set of data, such as a storage bank of passport photos. If a query is made, for instance, asking whether an image is male or female, the computer gives its best guess about the image’s gender at the conclusion of the analytical process.
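
By way of illustration only, the short sketch below trains a small multi-layer classifier on a hypothetical labeled "master set" and returns its best guess for a new image. The feature vectors, labels, and network size are stand-ins invented for this example, not a description of any particular product.

```python
# Illustrative sketch only: a tiny multi-layer ("neural net") classifier.
# The 200 "images" below are random stand-ins for real face-image features,
# and the labels are synthetic -- assumptions for illustration, not real data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
master_features = rng.normal(size=(200, 64))   # hypothetical master set (e.g., passport photos)
master_labels = rng.integers(0, 2, size=200)   # 0 = "female", 1 = "male" (synthetic labels)

# Two hidden layers, each applying multiple weighted operations to the input.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(master_features, master_labels)

new_image = rng.normal(size=(1, 64))           # features for a new, unlabeled image
probabilities = model.predict_proba(new_image)[0]
print("Best guess:", "male" if probabilities[1] > 0.5 else "female",
      f"(confidence {max(probabilities):.0%})")
```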

As you can see, data is critical in two regards. First, a master set of data is essential for the process to work. Second, the most powerful aspect of AI is its ability to learn. When the system processes a picture of a girl and says it is a boy, a human can correct the error. The next time the AI analyzes a similar image, it is much more likely to make a correct determination of the image’s gender.
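
To make the learning-from-correction point concrete, here is a minimal, hypothetical sketch of a human-in-the-loop update: when the model mislabels an image, the corrected label is fed back so that future predictions improve. The data, labels, and model choice are assumptions chosen for illustration.

```python
# Illustrative human-in-the-loop correction (synthetic data, hypothetical labels).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X_master = rng.normal(size=(200, 64))        # stand-in for the labeled master set
y_master = rng.integers(0, 2, size=200)      # 0 = "female", 1 = "male" (synthetic)

model = SGDClassifier(random_state=0)
model.partial_fit(X_master, y_master, classes=[0, 1])

girl_image = rng.normal(size=(1, 64))        # a new image the model may get wrong
if model.predict(girl_image)[0] == 1:        # model guesses "boy"
    # A human reviewer supplies the correct label, and the model is updated in place.
    model.partial_fit(girl_image, [0])

# Subsequent, similar images are now more likely to be classified correctly.
```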

This user-generated data is, and will continue to be, immensely valuable. Unlike a commercially available master set of data, it is proprietary to a specific company. Large amounts of data not only allow companies to offer better AI-powered services, but also add valuable assets to their balance sheets.

Regulatory Concerns and the FTC’s Body of Privacy Law

If user data is key to the effective combination of Big Data and AI, then companies have to make sure that the data is collected properly. In the world of the IoT, this poses interesting challenges.

For example, how does a business provide a hyperlink to a privacy policy for the tracking chip placed in an aspirin bottle? The data collected on the use of that bottle could be helpful for sending the user a reminder to buy more and, combined with other data, for determining whether a change in diet is needed or a more serious issue necessitates a doctor's visit.
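
Purely as a hypothetical sketch of that data flow, the snippet below turns a connected bottle's usage events into a refill reminder and a flag for unusually frequent use. The event schema, function names, and thresholds are invented for illustration and do not reflect any actual device.

```python
# Hypothetical sketch: turning connected-bottle usage events into alerts.
# The schema and thresholds below are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class BottleEvent:
    opened_at: datetime        # when the bottle's tracking chip reported an opening

def refill_reminder_due(events: List[BottleEvent], doses_per_bottle: int = 100) -> bool:
    """Suggest a refill once most of the bottle's doses appear to have been used."""
    return len(events) >= int(doses_per_bottle * 0.9)

def unusual_usage(events: List[BottleEvent], window_days: int = 7, threshold: int = 21) -> bool:
    """Flag unusually frequent use in the last week -- a possible prompt
    (combined with other data) to suggest a dietary change or a doctor's visit."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [e for e in events if e.opened_at >= cutoff]
    return len(recent) >= threshold

# Example: 24 openings over the past three days.
events = [BottleEvent(datetime.now() - timedelta(hours=h)) for h in range(0, 72, 3)]
print(refill_reminder_due(events), unusual_usage(events))
```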

What's the regulatory regime at play here for maintaining this data? Is it the Health Insurance Portability and Accountability Act (HIPAA)? The Food and Drug Administration?

We’ve addressed HIPAA's scope extensively in previous articles. (See “Unleash Social Media’s Engagement Power While Protecting Consumer Privacy” and part 1 and part 2 of “The FTC and Patient Privacy.”) Many of the actors involved in healthcare marketing, however, such as data brokers, ad agencies, marketing clouds, and developers of health-related IoT devices, may not fall within its ambit. Accordingly, this raises issues as to the applicability of the Federal Trade Commission (FTC) Act and the FTC's body of privacy law, which in turn is relied upon extensively by the attorneys general (AGs) of all 50 states and the District of Columbia.

The FTC's privacy jurisdiction has long rested on the notice and choice paradigm. When visiting a website, for example, consumers are essentially deemed to have read and consented to the host site's privacy policy. Consumer awareness of the value of privacy and of how their private data is used, however, has proven less than impressive.

For example, a 2015 survey by the University of Pennsylvania showed that 65% of consumers incorrectly believe that when a website has a privacy policy, it means the company will not share their information with other websites and companies. The same study showed that most consumers are not aware of the privacy implications of companies selling data regarding food purchases and over-the-counter drugs. As data takes on value far beyond what we have previously seen, it will be essential for companies to gain consumer consent to obtain, use and share data with other businesses and successor companies.

For healthcare marketing, the enforcement authority of the FTC and the state AGs has long been established. The vast amounts of data that will be collected through IoT devices will allow detailed consumer profiles to be created, as well as specifically timed marketing events when consumers are known to be at a certain location or conducting a specific online search.

Native advertising has been an effective marketing tool in the past few years. The key goal of native advertising is to present marketing material in a seamless, non-interruptive fashion within related content. The critical consumer protection issue is to ensure that the advertisement does not appear to be independent editorial content. In a healthcare setting, these concerns are manifestly apparent. It is critical to avoid the possibility that a consumer will take an action that is counterproductive to his or her health or recovery because of the mistaken perception that an advertisement is an objective article.

Key Takeaways from the FTC’s Report on Big Data

It is worth revisiting the FTC's report on Big Data from last January. In assessing Big Data, the FTC identified four key takeaway questions:

  • How representative is the data set?
  • How does the data model account for biases?
  • How accurate are the predictions from the data?
  • How are ethical/fairness concerns addressed?

When we examine the AI process, we see that running inputs against data sets to determine accuracy raises easy-to-spot concerns for the healthcare industry. For example, the AI analysis needs to account for any relevant socio-economic biases in the input data being considered, as well as in the master data set.
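
As a hypothetical illustration of what accounting for bias might look like in practice, the sketch below compares the demographic mix of a master data set against a reference population and flags underrepresented groups. The column names, groups, reference shares, and 20% tolerance are all assumptions.

```python
# Hypothetical representativeness check: compare a master data set's
# demographic mix against a reference population (all values illustrative).
import pandas as pd

master_set = pd.DataFrame({
    "income_bracket": ["high"] * 700 + ["middle"] * 250 + ["low"] * 50
})
reference_population = {"high": 0.30, "middle": 0.45, "low": 0.25}  # assumed population shares

observed = master_set["income_bracket"].value_counts(normalize=True)
for group, expected_share in reference_population.items():
    share = observed.get(group, 0.0)
    if share < 0.8 * expected_share:   # flag groups underrepresented by more than 20%
        print(f"'{group}' bracket underrepresented: {share:.0%} in data vs {expected_share:.0%} expected")
```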

The FTC’s Big Data report also referenced potential uses of data in financial contexts, such as those governed by the Fair Credit Reporting Act. Applied to healthcare, health-related information gathered from a consumer and otherwise lawfully shared with a financial institution (e.g., outside HIPAA's scope) could still bring legal complications if the institution, based on the consumer's health habits, denies a loan or imposes a higher interest rate.

Another twist is AARP’s recent lawsuit against the Equal Employment Opportunity Commission (EEOC) over rules that allow employers to offer health insurance discounts to employees who participate in wellness programs. The AARP’s concern is that this penalizes employees who wish to keep their health data private.

Tension Increasing as AI, Big Data and the IoT Reach Maturity

In coming years, there will be considerable tension as AI, Big Data, and the IoT reach maturity. Years ago, it was demonstrated that, armed with only a birth date, ZIP code and gender, researchers could correctly identify the "anonymized" health data of then-Governor William Weld of Massachusetts out of a pool of Massachusetts state employee data that had been released for research. With the power of AI, the challenge of effectively anonymizing data will be that much greater.
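
A minimal sketch of that re-identification risk, using entirely invented records: joining an "anonymized" health file to a public roster on the quasi-identifiers of birth date, ZIP code and gender can single out individuals. Every name, date, and column below is fictional and chosen only to illustrate the mechanics.

```python
# Hypothetical re-identification sketch: linking "anonymized" records to a
# public roster via quasi-identifiers (birth date, ZIP code, gender).
# All records below are invented for illustration.
import pandas as pd

anonymized_health = pd.DataFrame([
    {"birth_date": "1950-01-01", "zip": "02138", "gender": "M", "diagnosis": "condition-x"},
    {"birth_date": "1960-01-15", "zip": "02139", "gender": "F", "diagnosis": "condition-y"},
])
public_roster = pd.DataFrame([
    {"name": "A. Resident (fictional)", "birth_date": "1950-01-01", "zip": "02138", "gender": "M"},
    {"name": "J. Doe (fictional)",      "birth_date": "1971-03-02", "zip": "02139", "gender": "F"},
])

linked = anonymized_health.merge(public_roster, on=["birth_date", "zip", "gender"])
# A row that matches exactly one roster entry is effectively re-identified.
print(linked[["name", "diagnosis"]])
```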

Retention and purpose issues will also be key concerns. There have been multiple data breach cases involving data that had not been used for over five years—or, in some cases, had never been used at all. In addition to the time and expense of dealing with regulatory agencies in such matters, the brand damage was considerable when breach notifications went out to consumers who had not engaged with the businesses in years.
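
To make the retention concern concrete, here is a hypothetical sketch that flags stored records left unused beyond a retention window (five years, echoing the breaches described above) as candidates for deletion. The record structure, field names, and policy window are assumptions.

```python
# Hypothetical retention check: flag records not used within the policy window.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

RETENTION = timedelta(days=5 * 365)    # illustrative five-year retention policy

@dataclass
class StoredRecord:
    record_id: str
    collected_at: datetime
    last_accessed: Optional[datetime]  # None means the data was never used at all

def purge_candidates(records: List[StoredRecord], now: Optional[datetime] = None) -> List[str]:
    """Return IDs of records never used, or unused for longer than the retention window."""
    now = now or datetime.now()
    stale = []
    for r in records:
        reference = r.last_accessed or r.collected_at
        if now - reference > RETENTION:
            stale.append(r.record_id)
    return stale
```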

As detailed above, user data will be key to realizing the full potential of AI. Concerns about retention will have to be balanced against the astounding breakthroughs in healthcare delivery already underway. In mere seconds, AI can make diagnoses and suggest treatments based on a volume of medical research that no doctor could possibly have time to digest. Throw in the latest apps that allow consumers to describe their symptoms to an AI-powered "nurse" or "doctor," and the costs associated with medical "visits" can decline significantly.

This past summer, the FTC issued a decision in the LabMD case holding that the release of sensitive health information alone—without any financial information—could qualify as harm under the FTC's unfairness standard. Rather than its usual enforcement workhorse, the deception standard—which focuses on misrepresentations by businesses—the FTC used the unfairness doctrine to hold LabMD liable for the negligent retention of sensitive medical data, absent statements or representations from LabMD to consumers explaining how that data would be maintained. The case is currently working its way through the courts and has tremendous implications for IoT devices.

In aggregate, connected devices have the potential to paint extraordinarily detailed portraits of consumers. As sensitive as these portraits may be, their potential to benefit the course of healthcare is limitless. While there is much uncertainty as to what exactly the Trump administration will do, one phrase that has frequently appeared is "regulatory humility." This Administration and Congress will need to perform quite a balancing act between protecting consumer privacy and improving health and welfare through data-driven healthcare.