“The risk reasonably to be perceived defines the duty to be obeyed, and risk imports relation; it is to another or others within the range of apprehension.”

As you know, Helen Palsgraf was standing on a train platform in the early 20th century when a man carrying a package of fireworks, jostled while trying to board a train, dropped the package, and the explosion caused scales to fall on her. In denying her recovery against the railroad, Judge Cardozo’s words above greased the wheels of the dumb mechanical networks and satanic mills of the first industrial revolution (belatedly, as law tends to do), a world in which scales could fall on Ms. Palsgraf unforeseeably. The railroad’s range of apprehension (its ability to anticipate) was limited then, but how can law handle the information networks of this industrial revolution, which expand the range of apprehension as far as we want in almost any direction we want? If we’re smart, we don a blindfold where we neither need nor want to know, and we tell people about it, as Google did in not letting Glass do facial recognition. Because “intentions are more valuable than identities” in big data economics, some companies are destroying data that enable, or at least facilitate, reidentification, and are protecting data against reidentification. As the many types of possible inappropriate or impermissible discrimination associated with big data become clearer, these de-identification or pseudonymization efforts may in some cases have to give way to approaches that protect people by identifying them; for this reason, we can expect some traditional contrasts between US anti-discrimination law and European privacy law to continue in the big data economy.
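
To make the technique concrete: below is a minimal sketch, assuming a keyed-hash approach to pseudonymization, of how a direct identifier might be replaced before analysis. The record fields, key handling, and function names are illustrative assumptions, not any particular company’s practice; destroying the key is one way that “destroying data that enable reidentification” can work in practice.

```python
import hashlib
import hmac
import os

# Illustrative sketch only: one way a company might pseudonymize records
# before analysis. The secret key is kept separate from the data; destroy
# the key, and the pseudonyms can no longer be linked back to identities.
SECRET_KEY = os.urandom(32)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record: the analytically valuable "intention" survives,
# the identity does not.
record = {"name": "Helen Palsgraf", "station": "East New York", "intent": "travel"}
safe_record = {**record, "name": pseudonymize(record["name"])}
print(safe_record)
```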

Again and again, one of the most difficult questions for electronic communication networks and websites, and ever more so in the big data economy, is this: to what extent do we have an obligation to know, and then to act on, everything on our servers or in our cloud, to police the content that users enter, to take action to protect something or someone, to report violations? What obligations does the “100% auditability” of big data create? And the obligations keep growing. A recent FTC investigation found a franchisor responsible for the privacy violations of its franchisees in part because the franchisees’ actions were documented on the franchisor’s email system; and how many franchisees’ actions are not?

A great example of what happens when we’re not thinking carefully about these consumer issues is the statement made by an automotive executive at a conference: “We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” When his statement was reported, he immediately retracted it, stating that “We do not track our customers in their cars without their approval or their consent.” (Note to consumers: check your purchase agreements, leases, financing agreements, and insurance policies and applications.)

Perhaps the best way to show how much easier those consumer issues are to solve than the biggest big data foreseeability issues is to ask how much longer commercial airliners will fly without universal GPS tracking after Malaysia Airlines Flight 370. By 2008, there were already more non-human nodes online than people; by 2020, there may be more than 50 billion non-human nodes online. More importantly, when there is no link to direct consumer issues such as privacy or discrimination, the technology designer, manufacturer, or user has more trouble donning a blindfold. Imagine inspecting a factory or a construction worksite wearing Glass. Thousands of pictures are taken of everything and uploaded into tools capable of predictive pattern detection as never before, just as a scan with Glass of the East New York Long Island Rail Road station in the 1920s might have revealed scales that would fall even if some fireworks exploded far away on the train platform. Even if the exposures could not have been detected, there are still all of those pictures, each potentially more damning than the “smoking gun” email, in or out of context, in today’s product liability litigation.

And what of the machines, buildings, and other products created in the big data economy, sending off information about everything around them for their entire lives? The scales themselves at the train station would be admitting their proclivity to fall 24/7, perhaps identifying elevated threat levels whenever packages containing fireworks (themselves, of course, broadcasting 24/7) entered the station. Who will be responsible for being on the receiving end of such messages for the entire life of a building, a bridge, or an airplane? Is that the same entity that will keep the software updated and protected for that whole life? The same entity responsible for incident response and safety? Or, with no range of apprehension to limit the duty to be obeyed, will we allocate non-consensual duties in new ways?
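
To make the speculation concrete: here is a minimal sketch of the kind of 24/7 self-reporting such an object might do, and of the receiving end that someone must staff for its entire life. Every name, field, and threshold here is hypothetical; the point is only that the messages, and the question of who must act on them, never stop.

```python
import json
import random
import time
from dataclasses import dataclass, asdict

# Speculative sketch: a "smart scale" at a station broadcasting its own
# condition, and a receiver deciding when a reading crosses a threat
# threshold. All field names and thresholds are invented for illustration.

@dataclass
class ScaleTelemetry:
    scale_id: str
    timestamp: float
    tilt_degrees: float      # how far the scale is leaning
    anchor_integrity: float  # 1.0 = fully secured, 0.0 = about to fall

TILT_ALERT = 5.0        # hypothetical alert thresholds
INTEGRITY_ALERT = 0.4

def broadcast(reading: ScaleTelemetry) -> str:
    """Serialize a reading as the message the scale would emit, 24/7."""
    return json.dumps(asdict(reading))

def assess(message: str) -> str:
    """The receiving end: whoever holds this duty holds it for the object's whole life."""
    r = ScaleTelemetry(**json.loads(message))
    if r.tilt_degrees > TILT_ALERT or r.anchor_integrity < INTEGRITY_ALERT:
        return f"ELEVATED: scale {r.scale_id} may fall"
    return f"normal: scale {r.scale_id}"

# One simulated reading; in the scenario above, these arrive continuously.
reading = ScaleTelemetry("scale-7", time.time(),
                         random.uniform(0.0, 8.0), random.uniform(0.2, 1.0))
print(assess(broadcast(reading)))
```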

Palsgraf is said to have been the most influential U.S. case of the 20th century on non-consensual duties (torts). FIPPs (Fair Information Practice Principles) or no FIPPs, non-consensual duties look as though they are multiplying exponentially in the emerging internets of everything, if only because at any moment there will be so many interactions with so many different types of intelligent objects as to make even the best efforts to create meaningful choice incomplete. So it looks as though we are headed into another industrial revolution in which individuals may be swept up in systems over which they have no control (like the satanic mills), and non-consensual duties will therefore play a big role, both outside and within “trust networks,” in the areas of both privacy and fairness. We need norms for privacy and fairness, which is what the current White House big data initiative is about. We can be so much better than we were in the Palsgraf era, though, because when we take off the blindfold, the probabilities of harms and benefits can be infinitely clearer now than they were then, and informed judgments about risk of harm can be the basis for non-consensual duties of privacy and fairness.