Jurors and judges sit in court and evaluate credibility, continuously assessing who is telling the truth and who isn't. But what bias do they bring to those determinations? Lie detection is a notoriously unreliable skill: confidence tends to be high, but actual accuracy hovers closer to coin-flip levels. Independent of accuracy, however, our beliefs about lie detection can reveal something about bias. Based on some recent research, they reveal something about racial bias in particular, and more specifically, about the bias we bring to the task of judging whether witnesses of a different race are telling the truth.

The research (Lloyd et al., 2017) appears in the journal Psychological Science and is covered in a ScienceDaily release. The article, entitled "Black and White Lies: Race-Based Biases in Deception Judgments," reports on a series of experiments involving 605 research participants. The participants watched videos of Black and White individuals, some telling the truth and some lying. As they watched the videos, two boxes appeared on the screen, "Truth" and "Lie," and participants simply made a judgment and clicked the appropriate box. In some versions of the study, the monitors were equipped with eye-tracking technology so that the researchers could tell which box the participants focused on first and most before choosing one to click. After watching the videos, participants completed a survey on their attitudes toward fairness and prejudice, rating their level of agreement or disagreement with statements like, "It is important to my self-concept to be nonprejudiced toward Black people." The results of the study carry two important implications for lawyers trying to identify or adapt to biases in the courtroom: self-reported statements about bias aren't necessarily reliable, and attempts to correct bias can tip into overcorrection.


The first finding of the study was that, when it comes to labeling someone a liar, a White perceiver is more likely to apply that label to another White person than to a Black person. As the lead author, E. Paige Lloyd of Miami University, notes, "In our research, we document that White perceivers actually selected the truth response more for Black targets than White targets in a lie detection task, suggesting that they are overcorrecting for their anticipated racial bias." This bias, she says, stems from a motivation on the part of many of the White respondents to respond without prejudice: those showing the highest levels of agreement with the survey items expressing a desire to avoid racial bias actually showed the highest levels of overcorrection. In other words, their effort to be unbiased against Blacks translated into being biased in favor of Blacks, and the effect was most pronounced among those trying hardest to be unbiased.


One implication is that overcorrection of bias is a reality: someone who is trying very hard not to discriminate against one party may actually end up favoring that party. Awareness of a bias, and a promise to correct it, doesn't solve the problem, and may actually contribute to it. Court instructions, as well as counsel's own voir dire, by focusing repeatedly on a potential bias and telling jurors not to let it influence them, might play a role in inducing jurors to subtly favor the party they would otherwise be biased against. One good practice, and a good focus for additional research, might be to explicitly address the risk of overcorrection when it could hurt you:

There is also the possibility that a juror, in trying hard to avoid bias, might end up bending over backward to show that they aren't biased, and actually end up favoring a party for that reason. Do you believe that you could be sensitive to and avoid that kind of overcorrection in this case?


Remember that the study had two ways of measuring who was believed and who wasn't: whether participants clicked "Truth" or "Lie," and, via the eye tracking, which of those two boxes they looked at. We know from past research that the box that is looked at first and most is generally the box that will be clicked. But while White research subjects were more likely to say the Black speaker told the truth, their eye movements tended to tell a different story: White subjects were more likely to fixate quickly on the "Lie" response box while observing a Black speaker. That inconsistency between what they said and what they did led the authors to conclude that "the Whites' truth bias for Black targets is likely the result of late-stage correction processes." In other words, their first impulse was to judge the statement a lie; then they thought better of it and declared it true. According to E. Paige Lloyd, "White perceivers initially show an impulse to call Blacks liars, but then overcorrect for this later in the judgment process."


If our impulse differs from what we say, then the research subject who checks his impulse in a study might also be the juror who checks that impulse during deliberations. After all, a person who wants to avoid the perception of bias is likely to carry that motivation into the formal setting of the courtroom as well. But, more broadly, the research results provide yet another reminder not to confuse what people say with what they do, and not to rely on self-diagnosis as a way to uncover bias during voir dire. If people have an impulse to distrust a witness of another race, as Whites did in the study, that impulse may or may not be expressed. It may even be repressed and overcorrected. But if the impulse exists, it can still creep into the case in other ways. Ultimately, it pays to be humble about the chances of correcting a real bias in a legal context, and to rely not on straightforward admissions of bias, but on deeper dives into a person's makeup (like a social media analysis) in order to uncover the whole picture.