Earlier this week, we posted on the Ninth Circuit’s conversion of Daubert’s gate (the one the trial court is supposed to keep) into more of a swinging saloon door. A week before the Ninth Circuit ruled that a trial court had erred in excluding unreliable causation testimony (and granting summary judgment as a result), the Third Circuit had affirmed a trial court’s exclusion of unreliable causation testimony (and grant of summary judgment as a result). Even though we are discussing In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., __ F.3d __, 2017 WL 2385279 (3d Cir. 2017), second, it really is the bigger deal because it reaffirmed the end of an entire MDL.
We followed the district court’s Daubert rulings on the epidemiology and mechanism experts offered for all the plaintiffs. We watched in amazement as the plaintiffs got to try again and still could not offer reliable expert testimony on general causation. With our typical restraint, we applauded the court’s subsequent decision that, without the excluded experts, no plaintiff could make out a case for general causation between maternal use of the drug and the claimed cardiac birth defects, and that this failure was fatal to their claims. We found that the plaintiffs, maybe because of the sympathy associated with their claimed injuries, got plenty of leeway before the court determined that there was simply no there (i.e., good science) there. (Along the way, we saw that Pennsylvania and West Virginia state courts came to similar conclusions.)
The appeal to the Third Circuit focused on whether the biostatistician offered as a back-up expert on epidemiology was properly excluded, with plaintiffs conceding that they should have lost if he was. Plaintiffs’ central contention was that the district court created a standard that requires general causation opinions to be “supported by replicated observational studies reporting a statistically significant association between the drug and the adverse effect.” We think that standard, similar to Havner and Daubert II, is a fine standard, but the district court did not create or apply such a standard in knocking out the biostatistician. Likewise, the Third Circuit declined to “state a bright-line rule” that “statistical significance is necessary to prove causality.” (We think it is, because the Bradford Hill Criteria, which the biostatistician purported to apply, start with an association demonstrated through epidemiologic studies. We will try to resist arguing for the tighter standard given the result.) The district court considered the lack of multiple statistically significant studies supporting an association to be contrary to what teratologists generally require and thus relevant to whether an opinion without such support was unreliable. A flexible approach to evaluating the reliability of a general causation opinion was fine with the Third Circuit and its reading of the Bradford Hill Criteria. (There is flexibility, but only when there is an association from epidemiologic studies as a predicate. OK, we will have to try harder.)
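For readers who want the statistical-significance point made concrete, here is a minimal sketch, using made-up counts rather than data from any actual Zoloft study, of how epidemiologists typically test an association: compute an odds ratio from a two-by-two exposure/outcome table and a 95% confidence interval around it. If the interval straddles 1.0, chance has not been ruled out, and the association is not statistically significant at the conventional 0.05 level.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio and log-based (Woolf) 95% confidence interval
    for a standard 2x2 exposure/outcome table."""
    a, b, c, d = exposed_cases, exposed_controls, unexposed_cases, unexposed_controls
    or_point = (a * d) / (b * c)
    # Standard error of ln(OR): sqrt(1/a + 1/b + 1/c + 1/d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_point) - 1.96 * se_log)
    hi = math.exp(math.log(or_point) + 1.96 * se_log)
    return or_point, lo, hi

# Hypothetical study: the point estimate is elevated (OR > 1), but the
# confidence interval straddles 1.0, so the association is not
# statistically significant at the 0.05 level.
or1, lo1, hi1 = odds_ratio_ci(10, 90, 5, 95)
print(f"OR={or1:.2f}, 95% CI=({lo1:.2f}, {hi1:.2f})")
```

A “replicated” finding of the kind plaintiffs claimed the district court demanded would mean multiple independent studies whose intervals each exclude 1.0, not a single elevated point estimate.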
The Third Circuit “accept[ed] that the Bradford Hill and weight of the evidence analyses are generally reliable. We also assume that the ‘techniques’ used to implement the analysis (here, meta-analysis, trend analysis, and reanalysis) are themselves reliable.” That assumption is dicta—which is a good thing—because the court concluded that the biostatistician did not reliably apply the methodology or techniques that he claimed to be applying. First, he gave lip service to analyzing “multiple positive, insignificant results,” but he really just eyeballed trends. Second, his trend analysis was based on cherry picking and inconsistent application of basic statistics principles. Third, his meta-analysis was also result-driven, as he could not justify why he included some studies and excluded others. Fourth, his reanalysis was done for no reason but to conclude that a published study reporting no association should have found one. Altogether, “the fact that Dr. Jewell applied these techniques inconsistently, without explanation, to different subsets of the body of evidence raises real issues of reliability. Conclusions drawn from such unreliable application are themselves questionable.”
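To illustrate why the cherry-picking findings mattered, here is a toy sketch of the simplest form of meta-analysis (inverse-variance fixed-effect pooling), using hypothetical study results that are our own invention, not the actual Zoloft literature. Pooling all of the studies yields an interval that crosses the null value; silently dropping the null studies flips the pooled result to “significant,” which is why unexplained inclusion and exclusion choices undermine reliability.

```python
import math

def pool_fixed_effect(studies):
    """Inverse-variance fixed-effect pooling of log odds ratios.

    studies: list of (log_odds_ratio, standard_error) tuples.
    Returns the pooled log OR and its 95% confidence interval.
    """
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * y for w, (y, _) in zip(weights, studies)) / sum(weights)
    se_pooled = 1 / math.sqrt(sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical studies as (log OR, SE); a log OR of 0 means no association.
all_studies = [(0.40, 0.15), (0.10, 0.20), (-0.10, 0.18), (0.35, 0.25), (-0.05, 0.15)]

# Pooling everything: the interval crosses 0, i.e., no significant association.
est_all, lo_all, hi_all = pool_fixed_effect(all_studies)

# "Cherry-picked" pooling of only the positive studies flips the conclusion.
positive_only = [s for s in all_studies if s[0] > 0]
est_pos, lo_pos, hi_pos = pool_fixed_effect(positive_only)
```

The arithmetic is the same either way; what changes the answer is which studies go in, which is exactly the sort of unexplained, inconsistent selection the Third Circuit found unreliable.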
The court probably could have stopped there. It went on to detail how the biostatistician’s purported application of Bradford Hill was riddled with errors that he could not explain. This was more than enough to conclude that the district court had not abused its discretion in excluding the expert.
Along the way, however, it noted that it may be possible to have a reliable reanalysis that draws a different conclusion than the original published study and that an expert can make unsupported assumptions in connection with doing an “informational” reanalysis. It offered that “[t]hese inquiries are more appropriately left to the jury.” We disagree and think the broader context has to be considered. A plaintiff’s expert offered on the epidemiologic evidence who cannot offer a reliable opinion that there is an association between the exposure and the type of injury the plaintiff claims, let alone that there is a causal relationship, should not be talking to the jury about anything. A plaintiff’s expert offered on the epidemiologic evidence who can offer a reliable opinion that there is a causal relationship between the exposure and the type of injury the plaintiff claims can be allowed to discuss the various analyses she did to form that opinion. And the defense can cross-examine her on whether some of her analysis was result-driven for-litigation drivel or based on unsupported assumptions. A jury can hear that sort of back and forth and decide what weight to give to the expert’s testimony on general causation. However, no trial court should abrogate its gatekeeping role and let juries hear about reanalysis of published studies unless plaintiffs have reliable evidence of general causation in the first place. We suppose we prefer the opinions of the district court, which took its gatekeeping seriously, even if it let plaintiffs take a few shots at entry.