Kuhn v. Wyeth, one of the many hormone replacement therapy (HRT)-induced breast cancer claims unleashed after one arm of the WHI revealed that the risks of HRT outweighed any benefits, is another case in which a court has concluded that a reasonable though untested hypothesis is good enough for Daubert purposes.

The WHI was the first randomized controlled trial (RCT) to examine the effects of HRT on breast cancer risk, and it remains the most powerful study yet to address the question. Kuhn's problem was that this central pillar of the litigation showed no increased risk of breast cancer for short-term (<3 years) users like her. Plaintiff's expert thus needed to find a way around the WHI study while preserving its main finding.

With the help of plaintiff's counsel, her expert rummaged through three observational studies and identified data which, when considered together, are indeed suggestive of an increased risk at exposure durations of less than three years. However, 1) none of the studies were RCTs; 2) the largest (The Million Women Study) is laden with potential biases and confounders and doesn't even prove that HRT causes breast cancer; 3) the Calle study relies on self-reporting of past use (a notorious source of bias) and in any event didn't find much of a risk increase until after the third year of use; and 4) the final study, by Fournier et al., explicitly disclosed that "[w]idespread use of progesterone is a French peculiarity," such that the drug combination primarily being studied wasn't even the one Kuhn blamed for her breast cancer. After a hearing on a motion to exclude, the magistrate judge deemed the testimony unreliable. On appeal, the 8th Circuit reversed.

The court held that "[t]aken together, the Calle study and the foreign studies constitute appropriate validation of and good grounds for" plaintiff's expert's opinion. There's no discussion of what constitutes "appropriate validation" of an expert's opinion, or of the "good grounds" for holding it, but it appears that plausibility was the only test being applied. The court examined those subsets of data within the three observational studies relied on by the expert, along with his explanations of how they fit together, and concluded that the bits of data, as framed by the expert, do indeed support the theory of an increased risk for short-term users. And it is in fact a plausible hypothesis. But is a plausible hypothesis the sort of "scientific knowledge" that satisfies Rule 702? We don't think so.

There's a whole journal dedicated to clever medical hypotheses - each hypothesis typically resting on far more than bits of three articles. The journal exists to "publish papers which describe theories, ideas which have a great deal of observational support and some hypotheses where experimental support is yet fragmentary"; the ideas submitted for publication are first "reviewed by the Editor and external reviewers to ensure their scientific merit," and reviewers judge the papers on their plausibility. Even after all that, what gets published isn't "scientific knowledge." What's published are just ideas - potentially big ideas, but nothing more than interesting ideas until they're put to a test, a severe test, and survive it. That's science, and only the ideas that survive its testing can claim to be scientific knowledge.

As for the law, in a courtroom in Arkansas an expert witness will be allowed to advance, as scientific knowledge, a medical hypothesis conceived by a lawyer and fleshed out by the expert in just five hours. Scientific knowledge won't be needed to bridge the analytical gap between a tragic case of cancer and the drug combination claimed to be responsible. An untested yet plausible hypothesis, judged according to the persuasiveness of the expert advancing it, will suffice. We call the experts who can pull it off Gods of the Analytical Gaps.