Complex litigation tends to get framed as a problem for the jury system, but it is more properly viewed as a problem for any fact finder—juror, judge, arbitrator, expert panel—and for the litigants and their attorneys.
Still, the jury framing is useful because it brings into focus some of the resources a fact finder needs to tackle the problem: attention, memory storage and retrieval, education and training, and life experience. In these respects, groups are advantaged over individuals, and experts are advantaged over nonexperts. Since judges have greater average expertise but juries act as groups, it is difficult to identify a net advantage either way. And, of course, accuracy is only one criterion by which we evaluate legal judgment; a full assessment requires consideration of efficiency, fairness, legitimacy, and community representation.
The task of studying the topic of complex litigation recapitulates the key features of the problem. Complex litigation produces a vast and gnarly multidimensional search space, yet legal fact finders and jury researchers alike attempt to draw inferences from only fragmentary glimpses of isolated regions of that space. As a result, legal fact finders and jury researchers each combine sparse data with inferences that go beyond the data given. Theory is always important in sociolegal research, but for this topic, it is essential if we are to say much at all.
This research paper presents a theoretical framework for evaluating expertise and collective decision making and describes the research done in this area. It also examines the types of complexity with respect to the number of parties and issues in a dispute and the amount and complexity of the evidence presented in the trial.
Theoretical Issues
Expertise
The typical jury is obviously far less expert than the judge in one key respect—expertise on the law as it pertains to the case. But because juries do not provide a rationale for their verdict, we only rarely know that a jury has made a “mistake” on the law, and juries may not feel particularly hindered by their lack of legal expertise. What may matter far more is expertise with respect to the technical issues that may arise at the trial, involving the economic analysis of market power, the engineering of heavy machinery, the etiology of a disease, or the epidemiology of toxic exposure. Here, judges may outperform the average juror; judges are above average in education and intelligence, and they may have relevant experience from past trials. But we shouldn’t overestimate either intelligence or experience. Studies of expertise show that it can take a decade or more of concerted effort to develop true mastery of a technical skill. Graduate students are highly intelligent and still struggle for months to successfully complete their more technical graduate courses. And today’s judges are likely to have far less actual trial experience than their predecessors of earlier generations. As a sample of the community, the jury may collectively have more relevant expertise in nonlegal issues than the presiding judge.
Groups as Information Processors
In the 1950s, Irving Lorge and Herbert Solomon deduced that, ceteris paribus, groups are better situated than their individual members to find correct answers. If p is the probability that any given individual will find the “correct” answer, then the predicted probability P that a collectivity of size r will find the answer is P = 1 − (1 − p)^r. More recently, Lu Hong and Scott Page have derived theorems proving that cognitively diverse groups—defined with respect to the perspectives and schemas they use to tackle a problem—can outperform even their best members. But this model, like that of Lorge and Solomon, proves group competence, not group performance. Empirically, we know that performance often falls short of competence.
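To make the arithmetic concrete, here is a minimal sketch of the Lorge-Solomon prediction; the probabilities and group sizes are illustrative choices, not values drawn from their work.

```python
def lorge_solomon(p: float, r: int) -> float:
    """Predicted probability that a group of size r contains at least one
    member who finds the correct answer, given individual probability p."""
    return 1 - (1 - p) ** r

# Illustrative values only: even if just 1 in 4 individuals would solve the
# problem alone, a twelve-person panel is very likely to contain a solver.
for r in (1, 6, 12):
    print(r, round(lorge_solomon(0.25, r), 3))
# 1 0.25
# 6 0.822
# 12 0.968
```

The formula captures only the chance that someone in the group finds the answer, which is why it speaks to competence rather than performance.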
Both models hinge on a key premise: If at least one member finds the answer, it will be accepted as the collectivity’s solution—in short, “truth wins.” This can occur only if group members recognize the “correctness” of a solution once it is voiced. Unfortunately, there are two problems with this assumption. First, Garold Stasser and his collaborators have shown that not all relevant facts get voiced; group discussion tends to focus on shared rather than unshared information. Second, even when voiced, correct answers are not always recognized as such. At best, “truth supported wins”—at least some social support is needed for a solution to gain momentum, indicating that truth seeking is a social as well as an intellective process. But even that occurs only for some tasks. One such task appears to be recognition memory; research has shown that groups outperform their members on memory tasks. But for more complex inferential tasks, members need a shared conceptual scheme for identifying and verifying solutions. When they lack such a scheme, the more typical influence pattern is majority amplification, in which a majority faction’s influence is disproportionate to its size, irrespective of the truth value of its position. In other words, strength in numbers trumps strength in arguments.
In theory, collective decision making (or the statistical aggregation of individual judgments) is well suited for reducing random error in individual judgments. But bias is a different story. Biases can be produced by content—inadmissible evidence or extralegal factors such as race and gender—or by process, as when jurors rely on an availability heuristic (overweighting what comes most readily to mind), an anchoring heuristic (insufficiently adjusting away from an arbitrary starting value), confirmatory bias, or hindsight bias. Analyses by Norbert Kerr, Robert MacCoun, and Geoffrey Kramer suggest that under a wide variety of circumstances, collective decision making will amplify individual bias rather than attenuate it. The collective will tend to amplify individual bias when there is “strength in numbers,” so that large factions have an influence disproportionate to their size (as occurs explicitly under a “majority rules” scheme), and when the case at hand is “close” rather than lopsided. A case can be close for several reasons, and each may pose different challenges for the fact finder. Facts can be ambiguous and vague; they can be clear but contradict each other; or they can seem clear to each perceiver, but the perceivers may disagree on which side the “clear” facts support. The latter is particularly likely in an adversarial setting, where jury factions may form favoring each side of a dispute.
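The “strength in numbers” point can be made concrete with a small calculation. The sketch below is not the Kerr, MacCoun, and Kramer analysis itself; it is a bare-bones illustration assuming independent jurors, a strict-majority rule on an odd-sized panel, and made-up probabilities, to show how a modest individual-level bias becomes a larger group-level shift.

```python
from math import comb

def majority_verdict(p: float, n: int) -> float:
    """Probability that a strict majority of n independent jurors, each
    favoring the plaintiff with probability p, carries the verdict."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

# Hypothetical numbers: a bias that nudges each juror from .50 to .60
# produces a much larger shift once majority rule does the aggregating.
n = 11  # odd panel size, so a strict majority always exists
print(round(majority_verdict(0.50, n), 3))  # 0.5
print(round(majority_verdict(0.60, n), 3))  # 0.753
```

A ten-point shift in individual leanings becomes roughly a twenty-five-point shift in group outcomes, and the amplification is greatest when the unbiased case is close rather than lopsided.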
Defining Complexity
In 1987, Robert MacCoun postulated a preliminary taxonomy of three basic categories of complexity: dispute complexity (the number of parties and number of issues in a dispute), evidence complexity (the quantity, consistency, and technical content of evidence), and decision complexity (the complexity of the law and of the inferential steps and linkages required to render a verdict). In the 1990s, Larry Heuer and Steven Penrod conducted the first systematic statistical analysis of trial complexity in a field study of 160 criminal and civil trials. Judges were asked to rate the trials on a wide array of attributes. Factor analyses suggested three underlying dimensions, roughly overlapping MacCoun’s categories: evidence complexity, legal complexity, and the quantity of information presented at trial. As in earlier work, judges’ complexity ratings were unrelated to judge-jury agreement rates.
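As a rough illustration of the analytic step involved (not Heuer and Penrod’s data, items, or code), judge ratings could be factor-analyzed along the following lines; the rating matrix here is random placeholder data and the item examples in the comments are hypothetical.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder matrix: one row per trial, one column per complexity rating item
# (e.g., technicality of the evidence, clarity of the law, days of testimony).
rng = np.random.default_rng(0)
ratings = rng.normal(size=(160, 9))

# Extract three latent dimensions from the ratings; with real data, the
# loadings indicate how strongly each item relates to each dimension.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(ratings)
print(fa.components_.round(2))
```

With random data the loadings are just noise; the point is only the shape of the procedure, which maps many rating items onto a few underlying dimensions.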
Both MacCoun’s taxonomy and Heuer and Penrod’s analysis treated quantity as a problem for the fact finder. On reflection, that doesn’t necessarily follow. Large trials unfold over long stretches of time. Citizens who are able to track the plot complexities of soap operas such as All My Children or the team lineups of the NBA clearly have the resources to track large arrays of factual data. Indeed, inductive inference often gets easier with additional data, not harder. What probably matters more is the internal structure of the evidence—its inconsistencies, contingencies, and interdependencies.
Evidence Complexity
We know very little about evidence complexity in the trial context, but there are much larger bodies of research on deductive and inductive inference in nonlegal tasks. In approaching this literature, it is useful to keep two distinctions in mind. One is between two criteria for validity: correspondence versus coherence. Correspondence considers whether our inferences match the empirical facts; coherence considers whether our inferences “hang together” in a manner consistent with the normative standards of deductive logic, Bayesian updating, and the like. The second distinction is between competence and performance. Competence describes what we are capable of achieving; performance describes what we actually achieve. Crossing the two distinctions yields four cells, and a disproportionate amount of work has been done in the “coherence/performance” cell. We know that people routinely violate normative inference standards for even fairly simple tasks, and they do so systematically rather than randomly, through the use of heuristics. But various lines of evidence from the other three cells suggest that people—and honeybees, birds, and other organisms—are competent to perform inferences of remarkable complexity and sophistication in some settings. This work suggests that competence may exceed the performance we often observe and that the structure and sampling of evidence (and the match of data to our specific competencies) may be what closes that gap. So the applied challenge is to discover ways of restructuring fact-finding procedures to bring performance closer to competence.
David Schum (using a Bayesian perspective) and Nancy Pennington and Reid Hastie (using narrative schemas or “stories”) have done much to elucidate how the internal structure of evidence gets cognitively represented and analyzed by fact finders. (Much of this work has been collected in Hastie’s edited volume Inside the Juror.) Schum’s work shows that people can sometimes perform better when tackling small, piecemeal inferences rather than larger, more global inferences. Pennington and Hastie show how the temporal ordering of evidence at trial can facilitate (or interfere with) fact finders’ ability to form coherent narratives. Unfortunately, the adversarial setting poses difficulties very different from those one might encounter when mastering skills such as reading or learning to use a computer program. Evidence structures aren’t neutral; some favor one litigant at the expense of another. Indeed, lawyers with weak cases may even seek to undermine clarity.
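One way to see why piecemeal inference can help, under the Bayesian framing Schum works within, is that a global judgment can be decomposed into a chain of smaller updates. The sketch below uses invented likelihood ratios and assumes the evidence items are conditionally independent; it is a generic illustration, not an example taken from Schum’s analyses.

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """One Bayesian update, expressed on the odds scale."""
    return prior_odds * likelihood_ratio

# Hypothetical likelihood ratios for three items of evidence, each assessed
# on its own rather than folded into a single global impression.
likelihood_ratios = [2.0, 3.0, 0.5]

odds = 1.0  # prior odds of 1:1 on the ultimate fact
for lr in likelihood_ratios:
    odds = update_odds(odds, lr)

probability = odds / (1 + odds)
print(round(odds, 2), round(probability, 2))  # 3.0 0.75
```

Breaking the task into such links lets each item of evidence be weighed on its own terms, which is the sense in which small, local inferences can be easier to get right than one sweeping judgment.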
Highly technical evidence involving statistics, chemistry, engineering, or economics poses additional problems. The amount of time experts spend explaining highly technical concepts at trial falls well short of the time one spends learning in a semester-long course (without prerequisites!), though it still greatly exceeds what we can usually simulate in a mock jury experiment. Nevertheless, it seems likely that fact finders rely heavily on heuristic cues (“lots of charts,” “sure looked smart”) to compensate for their limited understanding of the material. Thus, Joel Cooper and his colleagues found that jurors were influenced by the content of expert testimony on the medical effects of polychlorinated biphenyls (PCBs) when it was relatively simple but relied on the witness’s credentials when the testimony was complex.
Dispute Complexity
Much of what we know about how juries handle dispute complexity comes from an important program of research by Irwin Horowitz and his collaborators. They found that mock jury verdicts are systematically influenced by the size and configuration of the plaintiff population. Aggregating multiple plaintiffs into a single trial appears to increase the likelihood that the defendant will be found liable, but each plaintiff’s award may be smaller than it would be if the claims were tried separately. There are similar trade-offs involved in trying all the issues together versus bifurcating (or trifurcating) the trial into separate segments for causation and liability, compensatory damages, and punitive damages—unitary trials may increase liability but lower damages. These effects are not neutral with respect to the parties, but bifurcation may be justified on procedural grounds because it appears to improve the quality of the decision process.
Conclusions
If judges clearly outperformed juries as legal fact finders in complex cases, we would face a dilemma. But we face no such dilemma: neither theory nor research indicates that judges are superior to juries in complex cases, and it is safe to say that both need all the cognitive help we can give them to cope with an increasingly complex world.
Research on complexity suggests that jurors may be better able to cope with it if they are encouraged to adopt the same strategies used by students who take notes and ask questions in class. Although the cognitive advantages of treating fact finders as active information processors may seem obvious, some attorneys and judges are reluctant to cede even a small measure of control over the case. But research shows that while these innovations help only modestly, they also do little or no observable harm.
References:
- Cooper, J., Bennett, E. A., & Sukel, H. L. (1996). Complex scientific testimony: How do jurors make decisions? Law & Human Behavior, 20, 379-394.
- ForsterLee, L., Horowitz, I. A., & Bourgeois, M. (1994). Effects of note-taking on verdicts and evidence processing in a civil trial. Law & Human Behavior, 18, 567-578.
- Hastie, R. (Ed.). (1993). Inside the juror. Cambridge, UK: Cambridge University Press.
- Heuer, L., & Penrod, S. (1994). Trial complexity: A field investigation of its meaning and its effects. Law & Human Behavior, 18, 29-51.
- Horowitz, I., & Bordens, K. S. (2002). The effects of jury size, evidence complexity, and note taking on jury process and performance in a civil trial. Journal of Applied Psychology, 87, 121-130.
- Kerr, N., MacCoun, R. J., & Kramer, G. (1996). Bias in judgment: Comparing individuals and groups. Psychological Review, 103, 687-719.
- Lempert, R. O. (1981). Civil juries and complex cases: Let’s not rush to judgment. Michigan Law Review, 80, 68-132.
- MacCoun, R. J. (1987). Getting inside the black box: Toward a better understanding of civil jury behavior. Santa Monica, CA: RAND.