A Nobel Prize-winning psychologist and a legal scholar are working on a book about noise, but the noise they are focused on is not sound; it is randomness in decision making.
The Nobel Prize winner, psychologist Daniel Kahneman, was turned on to this noise while working as a consultant for an insurance company on the valuation of large financial fraud claims. What he discovered is something those of us in this area have told clients for years – the value of your claim depends, at least in the first instance, on the individuals tasked with evaluating it.
On Noise and Decision Making
Kahneman and his colleagues gave the same facts to 50 insurance underwriters at the same company and asked them to place a value on the potential claim. The insurance company’s management expected that their underwriters’ valuations would differ by 10 percent or so. What Kahneman found was that the underwriters’ valuations differed by 50 percent. It was closer to 60 percent when the experiment was replicated at another insurance company. These results caused Kahneman to conclude, as he indicated in a recent interview with economist Tyler Cowen, that the companies were essentially “wasting their time.”
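To make those percentages concrete, the HBR write-up of this work measures noise as the average difference between pairs of judgments of the same case, expressed as a percentage of the judgments' average. Here is a minimal sketch of that calculation in Python, using made-up valuations rather than any figures from the actual study:

```python
from itertools import combinations

def noise_index(valuations):
    """Average absolute difference between every pair of judgments,
    expressed as a fraction of each pair's mean value."""
    pairs = list(combinations(valuations, 2))
    return sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

# Hypothetical valuations (in $ thousands) from five underwriters
# looking at the same claim file.
quotes = [90, 110, 130, 160, 200]
print(f"noise index: {noise_index(quotes):.0%}")
```

With these hypothetical numbers, two underwriters picked at random would differ by nearly 40 percent of the claim's value on average, even though each individual figure looks plausible on its own.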
In a Harvard Business Review article reporting on the results of this work, Kahneman and his colleagues wrote, “Replacing human decisions with an algorithm should be considered whenever professional judgments are noisy.” In his interview with Cowen, Kahneman went so far as to conclude that the idea we humans are needed for most decisions is an “illusion.” It seems even professional judgment is not immune to destruction at the hands of technology. We thought robots would destroy civilization in the future, but it seems only algorithms are necessary, a fate predicted by the 1983 movie “WarGames.”
Fortunately for us humans (aka non-algorithms), Kahneman and his colleagues spared us by also concluding that replacing humans with algorithms “in most cases … will be too radical or simply impractical.” As a (presumably non-optimal) alternative, they proposed adopting procedures that promote consistency by ensuring that employees in the same role use similar methods to seek information, integrate it into a view of the case, and translate that view into a decision.
Presumably Kahneman’s upcoming book with legal scholar Cass Sunstein (author of “Nudge”) will have more advice on eliminating noise in professional judgments. Unfortunately, the book isn’t due out until late 2020 or early 2021. Let’s hope that the algorithms haven’t taken over by then.
Noise in the Legal System
Lawyers have, at least intuitively (more on intuition in a moment), felt Kahneman’s noise since the founding of our legal system. Having a single judge or a single arbitrator rule on a particular case is a harrowing experience for most lawyers and litigants. Our legal system is largely based on a jury system, which operates similarly to Kahneman’s interim solution for bridging the gap between algorithms and humans.
Jurors receive the same facts and instructions and independently translate those into a decision, which they then discuss in an effort to arrive at a unanimous verdict. This is why jurors generally are not allowed to discuss the case with each other while it is ongoing, or to collect outside information. Appeals operate in a similar way, with each level of appeal generally adding decision makers to the process: usually three judges at the intermediate appellate stage, five justices at the state supreme court level, and nine justices if the case reaches the U.S. Supreme Court.
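The intuition that adding decision makers dampens noise can be shown with a toy simulation (our illustration, not anything from Kahneman's work): if each juror's judgment is treated as an independent noisy estimate of the same underlying value, the panel's average strays less from that value as the panel grows.

```python
import random
import statistics

random.seed(7)
TRUE_VALUE = 100.0   # the "right" answer no individual sees directly
SPREAD = 30.0        # how noisy each individual judgment is

def panel_estimate(n_members):
    """Average of n independent noisy judgments of the same case."""
    return statistics.mean(
        random.gauss(TRUE_VALUE, SPREAD) for _ in range(n_members)
    )

for size in (1, 3, 9):
    errors = [abs(panel_estimate(size) - TRUE_VALUE) for _ in range(10_000)]
    print(f"panel of {size}: mean absolute error {statistics.mean(errors):5.1f}")
```

Averaging n independent judgments shrinks the random error by roughly a factor of the square root of n, which is one reason a panel of nine is steadier than a single judge. Real jurors deliberate rather than average, of course, so this is only a rough analogy.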
The legal concept of stare decisis – or following prior precedent – also should help reduce noise. If an issue has been decided on roughly equivalent facts in a prior case in the controlling jurisdiction, the parties and their lawyers should be safe in assuming it will be followed again under the same or substantially similar set of facts.
But stare decisis works only where the facts are the same or substantially similar, and a lot of noise can seep in when comparing past and current fact patterns. Indeed, the underwriters in Kahneman’s insurance consulting work were experienced underwriters who presumably knew the precedent that would be applied to the cases they were asked to evaluate, but they came to substantially different valuations regardless.
Nevertheless, as Kahneman himself has advised in his other research, looking at how similar situations turned out in other instances (what he calls taking the “outside view”) is useful in predicting how the present instance will turn out.
Intuition Best Backed by Research
Experienced lawyers and insurance claims adjusters will answer all of this by pointing to their unique experience and proclaiming that their intuition about the value of cases is correct. But it turns out that humans are poor judges of their own judgment and tend toward overconfidence.
Truly useful intuition develops only under certain conditions, identified by Kahneman and Gary Klein as (1) a high-validity environment with distinct rules (think chess); (2) a high level of experience in that environment (think grandmaster); and (3) rapid and unequivocal feedback (think winning or losing the match).
This is where jury research comes in to help save us from ourselves. For a trial lawyer, nothing is more humbling than watching and listening to a group of mock jurors discuss your case and your arguments from behind a two-way mirror. When performed correctly, jury research (or, as we often prefer to call it, “theme development”) should at least muffle the noise present in any legal case or insurance claim.
One of the primary goals of jury research is to identify the types of noise (or themes) that are likely to enter a case, and then to examine how those themes interact with one another. Of course, a good deal of individual and group psychology creeps into this process that is beyond the scope of this post.
Analyzing the data and arriving at predictions from jury research also takes certain skills not necessarily possessed by trial lawyers. Phil Tetlock, in his research on forecasting, has noted that the best forecasters (what he calls “superforecasters”) are “cautious, humble, open-minded, analytical – and good with numbers.” He also recommends a team-driven approach governed by specific rules. Trial lawyers are trained in the law and the art of persuasion, not the elements of good forecasting. Good jury researchers supply those skills, helping both in the evaluation of cases and in the development of the themes most likely to resonate with the people who will ultimately hear the case.
It will be interesting to see the analysis and solutions offered by Kahneman and Sunstein and their impact on how we evaluate insurance claims and other disputes. In the meantime, parties and their attorneys should focus on identifying and reducing noise through consistent procedures and sound forecasting techniques. (That is, until the algorithms come for us.)