NO SURPRISES!
intoned the digital advertising guru at the IAPP Global Privacy Summit, preaching the conventional wisdom of a business community learning to live under emerging privacy rules based on context-sensitivity, much as the young Barack Obama learned — perhaps to our eventual disadvantage — that the best way to become and remain successful while black in America was to “make no sudden moves.” “Transformational leadership” in this world takes us through a gradual metamorphosis, as a caterpillar predictably enters a chrysalis and emerges as a butterfly, or a frog just sits there as the water gradually heats up, and is boiled before it notices the heat. The problem — or in the case of the frog, the opportunity — with this kind of transformation is that not only do we live in technological and geopolitical worlds of discontinuous disruption almost everywhere we look; human beings have never metamorphosed into bugs except in ways that are deeply unsettling…
Ovid’s Metamorphoses mocks the disruptive passions of the Gods themselves, and is a great read for any privacy, security or information governance professional, particularly before grappling with the disruptive transformation of the very deepest 20th Century passion into which I am about to dive. Is it Ovid’s amor or ira?
Not exactly. Rich Corinthian leather?
This is the second in a series of posts on how to take on one of the biggest questions for the Internet of Things (IoT): HOW can we achieve real security and meaningful consent in a swarm of millions of tiny, re-programmable devices, made by new market entrants, changing each other as they interact? In the first of these posts, we offered for your consideration the concept of a platform as an organizing principle for many aspects of the IoT that intersect with consumers, deliberately without telling you much about what a platform is, beyond the 21st Century condensation symbol of the smart phone. Now, my friends, the rubber meets the road, as we drive headlong into the one area of our 20th Century lives over which we were led to believe — all mortality statistics to the contrary notwithstanding — that we exercised some control: our cars.
Cars are SO about control that we wouldn’t even have the word “dashboard” without them. (Ovid was all over both this conceit and its tragic end, by the way; see, e.g., what happens when Phaethon asks his dad for the keys to the solar chariot to impress his friends.) What happens with self-driving cars is that our zone of perceived control is diminished and shifts — to infotainment and, I will suggest, to privacy. The infotainment and privacy dashboard of the future may be no more satisfying to real drivers than a toy dashboard on a toy car, but the Alliance of Automobile Manufacturers, an alliance of 12 top car and light truck makers, is already working hard on the rules:
And well they might, because state legislatures are already pulling the data from automotive black boxes and other recording devices away from the manufacturers except for servicing the vehicles, as the New Jersey General Assembly did unanimously last Thursday. Note to NY Auto Show attendees next week: To adjust your car’s privacy settings, you can use the George Washington Bridge, the Holland or Lincoln Tunnels, or any of the three other fine Port Authority bridges. And of course you can rest assured that NJ government’s continuing access to the data for “traffic management” purposes would never be abused…
So I am suggesting that the platform of control for privacy needs to be in the hands of the person formerly known as the driver (Ooh, that’s so irritating, I’ll just keep repeating it as “PfkatD”). The natural physical manifestation of that platform of control is, of course, the dashboard, but the dashboard could itself exercise the control or could merely communicate instructions to a phone or something else; the need for a dashboard is the need for boiling down the choices relating to thousands of components interacting with one another to a few salient choices.
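To make the “boiling down” idea concrete, here is a minimal sketch — all names hypothetical, not drawn from any actual automotive system — of how a dashboard might collapse the fine-grained data practices of many in-car components into a few salient, driver-facing choices, with one toggle fanning out to every affected component:

```python
# Hypothetical sketch: many components, few salient privacy choices.
from dataclasses import dataclass


@dataclass
class Component:
    """One in-car device and the categories of data it handles."""
    name: str
    data_categories: set  # e.g. {"location", "diagnostics", "media"}


# The dashboard surfaces only these salient, driver-facing categories,
# however many components sit behind them.
SALIENT_CHOICES = ["location", "diagnostics", "media"]


def build_dashboard(components):
    """Map each salient choice to the components it would control."""
    dashboard = {choice: [] for choice in SALIENT_CHOICES}
    for c in components:
        for cat in c.data_categories & set(SALIENT_CHOICES):
            dashboard[cat].append(c.name)
    return dashboard


def apply_choice(dashboard, choice, allow):
    """One driver toggle fans out to every affected component."""
    return {name: allow for name in dashboard[choice]}


fleet = [
    Component("nav_unit", {"location", "media"}),
    Component("obd_logger", {"diagnostics", "location"}),
    Component("infotainment", {"media"}),
]

board = build_dashboard(fleet)
# The PfkatD flips one "location" switch; both location-handling
# components are opted out in a single gesture.
settings = apply_choice(board, "location", allow=False)
print(settings)
```

Whether the dashboard itself enforces these settings or merely relays them to a phone or a back-end service is exactly the open design question; the point of the sketch is only the many-to-few mapping.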
For data security, on the other hand, neither the dashboard nor the car needs to be a platform under the perceived or actual control of the PfkatD. Now, before you get all upset with me, I’m talking data security; whether the thing has brakes and emergency steering or not is for another post by somebody else. To be effective — and, indeed, to prevent automotive safety issues — cybersecurity has to have faster reflexes than Dale Earnhardt Jr. It needs network-wide, high-velocity monitoring, interventions, and responses to and recovery from incidents. And it will come as no surprise that there are a tremendous number of standards and best practices. One good summary of those standards and practices for carmakers is Josh Corman’s 5-point “cyber-safety” plan. So the carmakers have their work cut out for them in assuring those standards and practices are met for the complex supply chains under their control. And in some senses the automaker itself becomes the platform of control, imposing standards met by the software developers and component manufacturers it retains, as well as by the network within the car, not subject to change by the PfkatD. But what about the bigger network? Is there a Ground Traffic Controller looking at a cybersecurity dashboard for the platform of a local, regional, national or global network of connected cars?
Whatever the answer, the dashboard in the car will not give the PfkatD even as much control over cybersecurity as over driving the self-driving car. The displacement of the controlling myth of automotive control, together with the strong advocacy of privacy from the tech industry that 15 years ago declared it dead, are reasons I expect to see some serious privacy preferences offered on the dashboard of the self-driving car. Stick that in your chrysalis and metamorphose it!