
We all know the expression, "You can lead a horse to water, but you can't make him drink." Maybe the version for trial lawyers is: "You can lead a jury to evidence, but you can't make them think." Of course, jurors will do their thinking on their own, and the whole point of having advocates is to influence that thinking. For consultants running a mock trial, however, the role is a little different: The focus during the mock trial is on understanding, measuring, and assessing...not influencing. The advocates on both sides aim for influence, but the researcher's methods and questions should avoid creating additional influence. And usually, we do avoid it. But there is one very common practice among consultants running focus groups and mock trials -- something we don't do, but other consultants commonly do -- and that is giving the mock jurors a list of arguments for both sides and asking them to agree or disagree with those packaged and simplified conclusions. At several points throughout a slip-and-fall mock trial, for example, the jurors will get a list like, "The stairs were poorly maintained," "The plaintiff should have been paying greater attention," "The missing tread tape was an accident waiting to happen," or "It is negligent to not use a handrail." Asking for the mock jurors' level of agreement or disagreement with that list of arguments can yield some very attractive data. Consultants and clients like it because it seems to give a fine-grained read on which arguments work and which don't. 

But there is one real problem with that method: Those questions are leading. "Leading" is the legal word; the social science version is "priming," but it is the same idea. The questions aren't just measuring attitudes that jurors may or may not hold; they are suggesting attitudes for mock jurors to adopt. Particularly when it is presented repeatedly at several points in a mock trial, a comprehensive list of arguments doesn't necessarily tell jurors what to think, but it certainly tells them what to think about. The mock jurors are still free to agree or disagree, of course, but they're also likely to adopt those arguments as a starting point, and that undercuts what I think should be one of the main purposes of a mock trial: seeing what emerges from the jurors themselves as the most important arguments. When we feed them a list of arguments between presentations, we are left wondering whether they would have come up with the same arguments, and framed and prioritized them the same way, if they had been left to their own devices. In this post, I'll look at the reasons to avoid, or at least minimize, those priming questions in mock trials, and I'll also answer some of the likely responses from proponents of the technique. 

Reasons to Avoid (or At Least Minimize) Priming Questions

Jury researchers should try to avoid priming questions in order to avoid creating, or influencing, the attitudes they are trying to measure. There are two main reasons why suggesting arguments followed by an "agree or disagree" question introduces bias. 

The Questions Set the Context (The “Assimilation Effect”)

Providing the mock jurors with a clear list of statements explicitly suggests, “These are the important arguments.” We know from research on what is called the "assimilation effect" (Bless, Fiedler & Strack, 2004) that this sets a context that influences later choices. Jurors are likely to agree, for example, that these are indeed the important arguments, and to at least treat them as a default starting point. But that does not mean they would have arrived at the same starting point on their own. 

The Questions Encourage Agreement ("Acquiescence Bias”) 

If you frame a question as a statement, you increase the chances that people will agree with it. This is called "acquiescence bias" (Simon, 2008), and according to data wizard Jon Krosnick, speaking at a recent conference of trial consultants, it can average around 13 percent. What that means is this: If you ask a question one way (e.g., "The stairs are an unreasonable risk") and 60 percent say "yes," then you would expect that if you flipped the question (e.g., "The stairs are not an unreasonable risk"), the same 60 percent would say "no." Only they wouldn't. Instead, you would find as much as 13 percent of your sample agreeing with both statements. So the form of the question slants the answer pretty heavily. 

Answers for Those Who Like to Use Priming Questions

Those who run mock trials, at least those with a social science background, know what priming is, and they know that these questions carry some risk. But I believe they would defend the use of priming questions in a few ways. Here is my answer to a few of those defenses. 

They Say, The Questions Are Balanced for Both Sides

Correct, they will usually try to include as many pro-plaintiff arguments as pro-defense arguments. But that is not the answer: Even if you are priming both sides' arguments, you are still priming. In addition, if jurors were left to their own devices, they would probably come up with more arguments for one side than for the other. So arming the two sides with equal numbers of arguments gives artificial help to the weaker side, and that can convey a false equivalence between the positions. Imposing balance in the arguments fed to the jurors through the questionnaire just creates another problem. 

They Say, We’re Just Asking About Arguments Already in the Presentations

Yes, the whole point of a mock trial is to get reactions to realistic presentations from each side, and those presentations include the arguments you plan to make and the ones you expect the other side to make. But there is a difference between testing mock jurors' natural reactions to an extended summary argument and feeding them a list of the arguments you believe should stand out from those presentations. Embedding the arguments in the questionnaires is different for three reasons: One, you are selecting the arguments, not them; two, you are repeatedly emphasizing those arguments after each presentation; and three, the mock jurors are participating by committing to agreement or disagreement with each one. All of that adds to the power of the priming. The bottom line is that you don’t know whether those are the arguments jurors would have picked, or whether jurors would have expressed them in the same way. 

They Say, The Data Is Really Useful

Yes, I admit, the result is some very pretty data: all jurors weighing in, across time, on your list of all the important arguments. That is useful, but it comes at the cost of losing other information: an answer to the question of what jurors would come up with on their own. I'd argue that this is probably the most useful outcome of a mock trial. 

And the kicker is that you can get the argument-agreement information in other ways, without priming. Instead of stating an argument and asking mock jurors whether they agree or disagree, ask an open-ended question like, "What is the most important reason for your leaning at this stage?" and then conduct a content analysis of the results. That way, the arguments you identify are the ones the jurors came up with, not the ones you did. And if you really want a reaction to a specific argument, ask for that reaction at the end of the project. By waiting until the final post-deliberation interview, you avoid suggesting and emphasizing the argument at a point where it could influence the results. 

Ultimately, you are better off not leading your mock jurors to water. Just watch them and see if and what they drink.