For several years now we’ve been trying to spread the word to the legal community that a great many people who hold themselves out as scientists, including more than a few who’ve published papers in the most prestigious peer-reviewed journals around, aren’t really doing science. They’re not coming up with hypotheses and testing them. Instead of avoiding the pitfall to which humans are particularly prone, the one whereby we become so enamored of our clever hypotheses that we grow blind to their holes and hostile to anyone who dares point them out, too many scientists are fooled by the ability of statistical analysis to readily generate spurious associations that, with a little post hoc narrative editing, look just like causal ones.

The combination of vast amounts of data quickly sliced and diced by powerful modern computers, plus multiple statistical methods that are poorly understood but easy to use, has led to the current crisis in biomedical science whereby only a shockingly small fraction of “scientific discoveries” turn out to be true. The essence of the problem is well put by the quote that appears in the subject line of this blog post. It’s from Donald Berry, a biostatistician at MD Anderson Cancer Center, and he made the remark during a discussion of the issue at last January’s meeting of the President’s Council of Advisors on Science and Technology. You can watch the portion of the conference dealing with irreproducible science here; it’ll take less than an hour of your time and is well worth it.

If you watch the webcast linked above you’ll hear concerned scientists explaining that a lot of other well-meaning scientists fail to comprehend the scientific method, are fooled by statistical tools they don’t understand, or both; and that more and better education is the answer. This idea, that with a little more of the right sort of education we’d get better science, assumes that nobody is trying to game the system. We’re to assume, for example, that: (1) no one is hatching his hypothesis after the computer has found the inevitable statistically significant associations that arise from looking at any bucket of data from multiple perspectives (if you doubt that finding something statistically significant in any random batch of numbers is easy, spend 60 seconds on An Exact Fishy Test, or see the simulation sketched below); (2) no one is p-hacking his way to confirmatory evidence for his favored hypothesis by turning random noise into seeming proof; (3) no one is consciously using a test that is biased in favor of validating his method; and (4) no one is exploiting the decision-making heuristics of peer reviewers and editors to sneak bad science into leading journals. If the article in this January’s issue of The Cancer Letter is any indication, we shouldn’t be too sure of such assumptions.
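For readers who would rather see points (1) and (2) demonstrated than take them on faith, here is a minimal sketch in Python (using numpy and scipy). The sample size, the number of “biomarkers,” and the 0.05 cutoff are illustrative assumptions of ours, not a model of any actual study:

```python
# Point (1): dredge pure noise from enough angles and "significant"
# associations reliably appear. Nothing below contains any real signal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_patients = 50      # assumed sample size, purely illustrative
n_biomarkers = 100   # assumed number of variables dredged

# A random "outcome" and random "biomarkers" -- no true association exists.
outcome = rng.normal(size=n_patients)
biomarkers = rng.normal(size=(n_patients, n_biomarkers))

# Correlate every biomarker with the outcome, keeping whatever clears p < 0.05.
false_hits = [i for i in range(n_biomarkers)
              if stats.pearsonr(biomarkers[:, i], outcome)[1] < 0.05]

# With 100 tests at alpha = 0.05, roughly 5 spurious "discoveries" are expected.
print(f"'Significant' biomarkers found in pure noise: {len(false_hits)}")

# Point (2): p-hacking by optional stopping -- peek at the p-value after
# every new pair of subjects and stop the moment it dips below 0.05.
a, b = [], []
for _ in range(200):             # assumed cap on sample size
    a.append(rng.normal())
    b.append(rng.normal())
    if len(a) >= 10:
        p = stats.ttest_ind(a, b)[1]
        if p < 0.05:
            print(f"Optional stopping 'worked' at n={len(a)}: p={p:.3f}")
            break
else:
    print("No lucky peek this run; another seed would likely oblige.")
```

Run it a few times: the dredging loop reliably turns up a handful of spurious “biomarkers,” and the peeking loop usually manages to stop on a lucky p-value, which is precisely why statisticians insist on pre-specified protocols and corrections for multiple comparisons.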

You need to read Duke Officials Silenced Med Student Who Reported Trouble in Anil Potti’s Lab for several reasons. First, it’s the story of a brave young man who risked his career by refusing to participate in, and attempting to expose, research practices that were shoddy at best and fraudulent at worst. Second, it’s about how an article published in Nature Medicine went from revolutionary to retracted. Third, it details how an institution dedicated to education was willfully blind to the rot that had set in at one of its most prominent laboratories, even after the rot was pointed out. Fourth, it reminds us that bad science isn’t a victimless crime – that desperate cancer patients endure worthless and time-robbing clinical trials as a result of it. Finally, the article reminds us of the power of our adversarial legal system and the good it can do by bringing truth to light. Though the Institute of Medicine had investigated, and Dr. Berry and others had pointed out the flaws in the since-retracted article, in the end everyone, perhaps out of a sense of collegiality, put the failings down to sloppy work, and it looked like the worst thing Potti was guilty of was resume inflation. But then came the lawyers for the patients. They uncovered the emails and audio recordings showing, if intent can be inferred from conduct, that the data dredging, cherry-picking and non-test testing used to construct Potti’s revolutionary finding and to justify the clinical trials were done quite deliberately.

So enjoy the read, remember that bad science can be hard to spot, that provenance is no guarantee of good science, and maybe take a little pride in the fact that the tort system once again has helped to advance the cause of truth.