Although peer-reviewed scientific evidence is central to most mass tort and product liability litigation, peer review is not foolproof, and the fact that an article has been peer-reviewed does not guarantee high quality or even scientific accuracy.
As an editor of the Journal of the American Medical Association explained, there is “no study too fragmented, no hypothesis too trivial, no literature too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.”[i]
Accordingly, it is important for counsel to consider the potential weaknesses of peer-reviewed studies relied on by plaintiffs’ experts. This article discusses the substance and procedure of how such studies may be undermined or limited.
- Peer-Review Failure
There have been some spectacular failures of peer review, including articles published in leading medical journals:
Lumpectomy/breast cancer—Prior to 1985, most patients diagnosed with breast cancer underwent a mastectomy, a disfiguring surgical removal of the entire breast. Treatment was revolutionized when a study was published in the New England Journal of Medicine reporting that breast-conserving lumpectomy was just as effective as mastectomy for early-stage breast cancer. Unknown at the time was that one of the investigators had falsified surgical and laboratory study data. Even after the fraud was uncovered in the early 1990s, it was not disclosed to physicians, patients or the public. Although subsequent re-analysis of the data (excluding the falsified data) confirmed the study’s key finding, for many patients, life-and-death decisions were made on the basis of fraudulent peer-reviewed data.[ii]
MMR vaccine/autism—In 1998, Lancet published a peer-reviewed study linking the measles-mumps-rubella vaccine to autism. The study was the opening shot in a decades-long controversy that endangered the public health as parents agonized over whether to immunize their children and the plaintiffs’ trial bar pressed a litigation assault on vaccine manufacturers. Undisclosed was the fact that the lead author of the study had received payments from a plaintiff’s attorney, that the methods purported to have been used were not followed, and that required ethics approvals for pediatric subjects had not been obtained. In 2010, the lead author was sanctioned and Lancet “fully retract[ed] this statement from the published record.”[iii]
Viagra/blindness—In 2006, a study published in the British Journal of Ophthalmology purportedly linked Viagra to NAION, a condition that can cause blindness. That article fueled multidistrict product liability litigation and the lead author became plaintiffs’ key expert witness. The court initially denied a Daubert challenge seeking to exclude the expert, principally because the study was published in a peer-reviewed journal,[iv] but did permit discovery regarding the study. The discovery revealed substantial inaccuracies in the study data, errors in the statistical methods, and mistakes in the computer programming as well as other flaws, leading the court to reconsider and reverse its initial decision. The court concluded that “Almost every indicia of reliability the Court relied on in its previous Daubert Order regarding the McGwin Study has been shown now to be unreliable. Peer review and publication mean little if a study is not based on accurate underlying data.”[v]
Accutane/depression & suicide—In 2005, the American Journal of Psychiatry published a study linking Accutane to changes in brain function implicated in depression. Although the article did note that funding had been provided by lawyers involved with Accutane litigation and acknowledged some limitations, undisclosed and undetected by peer review were flaws in the data and methodology that rendered the study scientifically worthless. Similar to the Viagra NAION case, a court decision found that while the study “was peer reviewed, [the researcher] admitted that he did not in fact follow the steps described in the article.”[vi] The researcher “could not document much of the data on which his published results were based.”[vii] “[H]e admitted that some of the statistical analysis was inaccurate,” and “he admitted that some of the [data] he used in his calculations were inaccurate, [but] could not check the accuracy of the remaining numbers because the original data could not be retrieved.”[viii] Based on these flaws, the court ruled that the expert could not rely on the study.
- Challenging Scientific Literature—Substance
- Did the Researcher Actually Utilize the Methods Represented to Have Been Used?
Manuscripts reporting original scientific research are generally divided into four sections: Introduction, Methods, Results and Discussion. The “methods” section is crucial because it permits readers to assess precisely how the investigator conducted the experiment and to replicate the experiment, which is the hallmark of true peer review.
Recent cases have revealed instances where the researchers have not followed the methodology represented to have been used in their studies. In the Viagra case, the researcher represented in the published paper that he counted subjects as “exposed” to Viagra only if they used the medication before they developed NAION; the court found that the researcher did not adhere to this methodology and that a number of subjects who were counted as exposed had been diagnosed with NAION before they first used the medication (obviously the medication could not have caused the condition in that circumstance).[ix] Similarly, in the Accutane case, the Court found that the researcher “did not actually use the methodology he claimed to have used,” and that “contrary to representations made in the article, he did not get before-and-after . . . questionnaires from many of the subjects.”[x] Likewise, in the MMR vaccine case, the Lancet retraction reports that the investigator did not adhere to the methods claimed in the study: “In particular, the claims in the original paper that children were ‘consecutively referred’ and that investigations were ‘approved’ by the local ethics committee have been proven to be false.”[xi]
- Were the Statistical Calculations Done Properly?
Statistical analysis is crucial to interpreting the results of a study. Statistics help to determine whether the observed results of an experiment are likely a function of chance or whether they reflect a true association. Guidelines issued by the International Committee of Medical Journal Editors specify that a biomedical study should provide sufficient statistical detail so an independent researcher can verify the claimed results.[xii]
Biomedical journal peer reviewers do not ordinarily check the statistical calculations to verify the results reported in a study.[xiii] That omission leaves room for error. Recent cases show that it is not safe to assume that statistical calculations in published studies are valid.
In the Viagra case, the court found that the statistical “methodologies described in the study were not the actual methodologies used.”[xiv] In addition, the court found that the statistical computer programming “code that [the researcher] wrote to produce the numbers in the McGwin Study contained errors that would affect the odds ratios and confidence intervals.”[xv] Similarly, in the Accutane case, the investigator “admitted that some of the statistical analysis was inaccurate.”[xvi]
Recently, in the Zoloft birth defects litigation, a defense expert epidemiologist noticed a subtle anomaly in the odds ratio and confidence interval of a key finding. (For an odds ratio, the upper and lower bounds of the confidence interval should be symmetric around the point estimate on a logarithmic scale; here, the expert observed that when the data were graphed the interval appeared asymmetric.) The authors of the paper were contacted and they promptly published a correction in the New England Journal of Medicine.[xvii] The correction played an important role in the court’s decision to exclude plaintiffs’ biostatistician expert because the corrected odds ratio was no longer statistically significant. The court found that the expert “rel[ied] upon replication of statistically significant (and borderline significant) results . . . [but] did not reconcile this [new] information with his opinions.”[xviii]
The takeaway message: defense counsel should have their experts carefully scrutinize, and attempt to replicate (to the extent possible), any key finding relied upon by plaintiffs’ experts.[xix]
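Such replication is often straightforward arithmetic. The sketch below (the 2×2 counts are hypothetical, not drawn from any study discussed in this article) recomputes an odds ratio and Wald confidence interval from a published table, and applies the internal consistency check a reviewing expert can run on any reported interval: a Wald interval should be symmetric around the point estimate on a logarithmic scale.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

def log_symmetry_gap(or_, lo, hi):
    """A Wald interval is symmetric around the odds ratio on the log
    scale; a large gap suggests the reported numbers are inconsistent."""
    return (math.log(hi) - math.log(or_)) - (math.log(or_) - math.log(lo))

# Hypothetical counts: 10 exposed cases, 20 exposed controls,
# 5 unexposed cases, 40 unexposed controls.
or_, lo, hi = odds_ratio_wald_ci(10, 20, 5, 40)  # OR = 4.0
```

For example, a reported odds ratio of 2.5 with a 95% interval of (1.1, 3.6) fails the symmetry check, because the geometric midpoint of those bounds is roughly 2.0, not 2.5.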
- Does the Dataset That Was Analyzed Accurately Reflect the Condition of the Subjects?
Before any statistical analysis can be performed, data needs to be collected and recorded. There is considerable opportunity for error in that process. Initial data collection often involves making entries on paper forms; such entries may be ambiguous or inconsistent (recall the highly contentious 2000 presidential election, where hanging chads and other ambiguities made it difficult or impossible to assign certain ballots to a candidate). If the initial data is collected on paper forms, it must then be entered into an electronic dataset, and errors may be made in transcription.
A well-done study will, of course, utilize procedures, including quality control procedures, to minimize and correct data entry errors. Indeed, the International Society for Pharmacoepidemiology has promulgated good practice guidelines to ensure data quality and integrity.[xx]
Nevertheless, peer review does not guarantee that good practices have been observed or that the electronic dataset accurately represents the study population. Even apart from intentional falsification of data—such as occurred in the breast cancer lumpectomy trials—recent cases demonstrate that studies published in peer-reviewed publications may contain serious data errors. In the Viagra case, the court found that there were “miscodings” in the electronic dataset and that “the discrepancies between the dates of first use on the original survey forms and in the electronic dataset raise serious concerns about the reliability of the McGwin Study as originally published.”[xxi] Similarly, in the Accutane case, the court found that the researcher “admitted that some of the [data] he used in his calculations were inaccurate, [but] could not check the accuracy of the remaining numbers because the original data could not be retrieved.”[xxii]
In testing the validity of a study, it is important to obtain the original data collection forms if possible so they can be compared to the electronic dataset.
- Does the Study Properly Address Confounding, Bias and Chance?
Chance—“A study may find a positive association (relative risk greater than 1.0) when there is no true association.”[xxiii] A chance finding attributable to random error is called a false-positive. Requiring that a finding be statistically significant reduces the play of chance, but does not eliminate it. Indeed, under the conventional definition of statistical significance, a test of a truly null association will yield a statistically significant result by chance alone 1 time in 20. If the investigators perform multiple comparisons—by conducting numerous statistical tests on the same dataset—the likelihood of spurious associations increases.[xxiv] In the Zoloft birth defects litigation, the court excluded plaintiffs’ expert epidemiologist in part because she relied on findings that might have been “statistical artifacts of multiple comparisons.”[xxv]
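The arithmetic behind the multiple-comparisons problem is simple. Assuming k independent tests of hypotheses that are all truly null, the chance that at least one comes back "significant" at the conventional 0.05 level is 1 − 0.95^k, which grows quickly with k. The one-line function below is an illustrative sketch under that independence assumption:

```python
def prob_any_false_positive(k, alpha=0.05):
    """Probability that at least one of k independent tests of true
    null hypotheses is 'statistically significant' at level alpha."""
    return 1 - (1 - alpha) ** k

# One test carries a 5% chance of a spurious "finding";
# twenty independent tests carry roughly a 64% chance.
```

In other words, with twenty independent comparisons of null associations, at least one spurious statistically significant result is more likely than not.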
Bias—Bias is “anything that results in a systematic (nonrandom) error in a study result and thereby compromises its validity.”[xxvi] There are many types of bias that may creep into a study, such as selection bias (where there is some intrinsic difference between cases and controls) or detection bias (where cases routinely undergo more frequent medical testing than controls).
Confounding—“The third major reason for error in epidemiologic studies is confounding.”[xxvii] That is, what may appear to be a relationship between an exposure and an outcome is actually due to the fact that both the exposure and outcome are related to another variable that was overlooked or could not be accounted for in the analysis. For example, the studies investigating whether there may be a relationship between maternal use of antidepressant medications and birth defects note that depression is associated with co-morbidities that increase the risk of birth defects and that it is difficult to ascertain whether an observed risk is attributable to the medication or to the underlying indication for which the medication is prescribed (this is called “confounding by indication”).
All studies should be carefully scrutinized for weaknesses and limitations attributable to confounding, bias and chance.
- Are the Findings Based on Post-Hoc Subgroups?
A well-designed study has a clearly articulated hypothesis that is being investigated, and in the case of biomedical studies, in a specifically defined population. Thus, the International Committee of Medical Journal Editors specifies that the introduction of a study should:
State the specific purpose or research objective of, or hypothesis tested by, the study or observation. . . . Both the main and secondary objectives should be clear, and any prespecified subgroup analyses should be described.[xxviii]
The notion that subgroup analyses should be limited to subgroups that are pre-defined as part of the hypothesis under investigation, and before data collection, is critical. Subgroups defined after the data has been collected are inherently suspect because if any set of data is partitioned into small subsets it is likely (or inevitable, if there are enough subgroups) that some subset will show a statistically significant difference, even if there is no real underlying difference between the groups.
In other words, “subgroup analyses are problematic because as you do multiple comparisons, you may get statistically significant results purely as a function of the subgroup and chance.”[xxix] For that reason, post hoc subgroup analysis “smacks of betting on a horse after the race is over.”[xxx] As an example, if one examines each of the Zodiac signs separately in a trial of aspirin to treat heart attacks (a therapy that has been proven to be effective), one would conclude that most Zodiac signs derive benefit from aspirin, but those born under the signs Libra and Gemini are actually harmed by aspirin.[xxxi] Thus, even if there is no intentional manipulation, data dredging subgroups after the fact will often yield a statistically significant result and a creative researcher can find a scientifically plausible explanation to fit virtually any observed result.
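The Zodiac phenomenon can be reproduced in a small simulation. The sketch below (all parameters are hypothetical) generates trials in which the treatment truly has no effect in any subgroup, then counts how many subgroup analyses nonetheless clear the conventional significance threshold:

```python
import random

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic; |z| > 1.96 corresponds
    roughly to p < 0.05 (two-sided)."""
    p = (x1 + x2) / (n1 + n2)
    if p in (0.0, 1.0):
        return 0.0
    se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5
    return (x1 / n1 - x2 / n2) / se

def spurious_subgroup_findings(n_studies=200, n_subgroups=12,
                               n_per_arm=100, event_rate=0.3, seed=1):
    """Count subgroup analyses that look 'significant' by chance alone.
    The treatment has no real effect: both arms share the same event rate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        for _ in range(n_subgroups):
            treated = sum(rng.random() < event_rate for _ in range(n_per_arm))
            control = sum(rng.random() < event_rate for _ in range(n_per_arm))
            if abs(two_prop_z(treated, n_per_arm, control, n_per_arm)) > 1.96:
                hits += 1
    return hits
```

With twelve subgroups per simulated study, roughly one subgroup analysis in twenty appears "significant" even though no effect exists anywhere in the data.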
Accordingly, counsel should be wary of studies that report results that are statistically significant only as to select subgroups. It is worth investigating the underlying study protocol to determine whether the subgroup was pre-specified or defined after the data was collected. As one court stated, “[I]t is not good scientific methodology to highlight certain elevated subgroups as significant findings without having earlier enunciated a hypothesis to look for or explain particular patterns.”[xxxii]
- Did Matrixx v. Siracusano Eliminate the Legal Requirement of Statistical Significance?
In General Electric Co. v. Joiner, 522 U.S. 136, 145–47 (1997), the Supreme Court affirmed exclusion of expert testimony as unreliable because, among other things, it was predicated on studies whose findings were statistically insignificant. Accordingly, federal courts generally require statistically significant epidemiological proof of causation under Daubert.[xxxiii]
Plaintiffs have attempted to chip away at the statistical significance requirement citing a dictum in Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309 (2011). However, Matrixx is not a case about the standard for reliable expert testimony under Federal Rule of Evidence 702; it is a securities fraud case about the standard for materiality concerning information that would be significant to an investor. The defendant argued that adverse event reports were not material information because the number of such reports did not establish a statistically significant risk that the product was causing the adverse events. The Supreme Court rejected this argument and held that adverse event reports can be material to securities disclosure obligations even absent statistical significance. In its opinion, the Supreme Court made clear that it was not even considering—much less ruling—that a scientific expert may reliably conclude that causation exists predicated on findings that are not statistically significant:
We note that courts frequently permit expert testimony on causation based on evidence other than statistical significance. We need not consider whether the expert testimony was properly admitted in those cases, and we do not attempt to define here what constitutes reliable evidence of causation.
131 S. Ct. at 1319 (internal citations omitted) (emphasis added). In the Zoloft birth defects litigation, the Court explained that Matrixx was inapplicable and held that statistical significance remained a requirement, rejecting the so-called “Rothman approach” that places diminished weight on statistical significance.[xxxiv]
- Has the Author Disclosed Her Litigation Consulting Work?
Virtually all journals require that authors disclose conflicting financial interests. Thus, where an author has been retained by a party as a litigation expert, and for that reason has a financial interest in the outcome of the litigation, there is a disclosure obligation.
The “Uniform Requirements for Manuscripts Submitted to Biomedical Journals,” promulgated by the International Committee of Medical Journal Editors, specify that “Financial relationships (such as employment, consultancies, stock ownership, honoraria, and paid expert testimony) are the most easily identifiable conflicts of interest and the most likely to undermine the credibility of the journal, the authors, and of science itself,” and place responsibility on the author/litigation expert “for disclosing all financial and personal relationships that might bias their work.”[xxxv] Nevertheless, litigation experts have been known to fail to provide the required disclosure.
In July 2008, Lancet Oncology published an article regarding smokeless tobacco and cancer in which the conflict disclosure stated: “The authors declare no conflicts of interest.”[xxxvi] Shortly after publication, the journal learned that one of the authors, Dr. Steven Hecht, had been working as a plaintiffs’ litigation expert, and promptly published a correction: “During the immediate months preceding submission of the review [Dr. Hecht] was acting in the capacity of an expert witness for the plaintiff in a future court case against a smokeless tobacco company. [Dr. Hecht] declares his participation in this case in no way influenced his writing or involvement in the review.”[xxxvii]
The Committee on Publication Ethics, which promulgates guidelines adopted by many scientific journals, has found instances of nondisclosure of litigation consulting, which it characterizes as “a major conflict” of interest. COPE recommends that journal editors investigate any alleged failure to disclose expert litigation work and that they either require disclosure or refuse to publish the manuscript.[xxxviii]
Thus, where a litigation expert has published on a topic pertinent to a case, the timing of the expert’s retention and disclosure to the publication should be explored.
- Confronting Peer-Reviewed Articles—Procedure
- Are the Underlying Raw Data, Protocols and Statistical Calculations Discoverable?
As noted previously, even absent litigation in which an expert relies on a scientific study, peer-reviewed medical journals are increasingly requiring authors to make available upon request their original unprocessed data, including the study protocol, the electronic dataset used in the analysis, and the computer code used to analyze the data and generate the statistical results. Thus, the American College of Epidemiology has published a policy statement encouraging data sharing;[xxxix] the Annals of Internal Medicine has adopted a “reproducible research” initiative that “require[s] authors to state whether they are willing to share the protocol, data or statistical code;”[xl] the National Institutes of Health require data-sharing for all grants with funding in excess of $500,000;[xli] and federal regulations provide that research data collected with federal funds must be made available under the Freedom of Information Act.[xlii] Procedures have been devised for researchers to prepare their underlying data in a format suitable for sharing that protects confidentiality and medical privacy (including compliance with the Health Insurance Portability and Accountability Act).[xliii]
When litigation is involved, it is a bedrock legal principle that a litigant “‘has a right to every man’s evidence,’ except for those persons protected by a constitutional, common-law, or statutory privilege.”[xliv] Where a scientific study is central to the opinion of an expert in a product liability litigation, the law is well settled that the underlying study data is obtainable by subpoena. The seminal case is Deitchman v. E.R. Squibb & Sons, Inc.[xlv] In that case, Squibb and other pharmaceutical companies were defendants in actions alleging that diethylstilbestrol (DES) caused vaginal adenocarcinoma in the daughters of women who used the medication. Squibb served a document subpoena upon Dr. Arthur Herbst, a researcher who maintained a registry of vaginal adenocarcinoma cases and published more than a dozen articles regarding DES and adenocarcinoma.
Although Dr. Herbst was not engaged as an expert in the litigation, plaintiffs’ experts relied on his studies in support of their product liability claims. Dr. Herbst moved to quash the subpoena on the grounds that it was burdensome and oppressive, and more importantly, that it would jeopardize patient confidentiality and deter patients and physicians from supplying the registry with data in the future. The Seventh Circuit reversed the trial court decision quashing the subpoena. The court found that Squibb had a compelling need to examine the underlying data in order to test the validity of the studies:
The value of the conclusions turns on the quality of the data and the methods used by the researcher. . . . So if the conclusions or end product of a research effort is to be fairly tested, the underlying data must be available to others equally skilled and perceptive. . . . [A] study of this sort may have a number of different but inadvertent, biases present. . . . For Squibb to prepare properly a defense on the causation issue, access to the Registry data to analyze its accuracy and methodology is absolutely essential.[xlvi]
The court ruled that a protective order could be fashioned to compensate Dr. Herbst for his time and to protect medical privacy.[xlvii]
A similar result was obtained in the multidistrict phenylpropanolamine (PPA) products liability litigation. Plaintiffs, claiming to have suffered strokes as a result of using over-the-counter cough and cold medications that contained PPA, relied on an epidemiologic study, the Yale Hemorrhagic Stroke Project (HSP). Defendant pharmaceutical manufacturers served “a series of subpoenas on hospitals possessing medical records for participants” in the Yale Study so they could “verify the accuracy of the data underlying the HSP and to clarify the extent to which the HSP participants were scrutinized for ‘potential stroke risk confounders.’”[xlviii] The court denied a motion to quash the subpoenas and directed the parties to work with the hospitals to establish a redaction protocol so the underlying data could be produced.
In the multidistrict hormone replacement therapy (HRT) litigation, defendants subpoenaed and obtained underlying data from the Women’s Health Initiative study, which is the cornerstone of plaintiffs’ claims that their use of HRT caused breast cancer. As in the DES and PPA cases, the MDL court supervising the HRT litigation entered orders directing the Fred Hutchinson Cancer Research Center to produce underlying data, subject to certain restrictions.[xlix]
Thus, it is clear that researchers who publish studies relevant to product liability litigation can be compelled to produce their underlying data, protocols and statistical calculations, subject to a suitable protective order.
- Are Peer-Reviewed Comments, Criticisms and Related Documents Discoverable from Scientific Journals?
The International Committee of Medical Journal Editors obligates medical journals to hold peer-review communications confidential and to oppose requests for discovery:
Editors must not disclose information about manuscripts (including their receipt, content, status in the reviewing process, criticism by reviewers, or ultimate fate) to anyone other than the authors and reviewers. This includes requests to use the materials for legal proceedings.[l]
In two recent decisions, both involving the arthritis medications Bextra and Celebrex, district courts in Massachusetts and Illinois have sided with medical journals and held that peer-review communications are not discoverable. In May 2007, Pfizer, which was a defendant in multidistrict product liability litigation involving the medications, served the New England Journal of Medicine with a subpoena seeking peer-review documents concerning 11 articles the journal had published about the drugs, as well as peer-review documents relating to manuscripts that the journal had rejected. Pfizer subsequently limited its requests to “(1) the complete record of communications between the NEJM editors and the authors of any articles (published or unpublished) concerning Celebrex or Bextra and (2) copies of any documents produced, voluntarily or otherwise, in connection with any dispute concerning Celebrex or Bextra.”[li] The journal advised that it had no documents responsive to request (2), and it moved in the District of Massachusetts to quash the subpoena as to request (1).
Granting the journal’s motion to quash, the court found that editors and peer reviewers were entitled to the same type of confidentiality as journalists and that “the batch or wholesale disclosure by the NEJM of the peer reviewer comments communicated to authors will be harmful to the NEJM’s ability to fulfill both its journalistic and scholarly missions.”[lii]
Pfizer also served the Journal of the American Medical Association with a subpoena that was virtually identical to the one served on the New England Journal of Medicine, and when JAMA objected, Pfizer filed a motion to compel in the Northern District of Illinois. Reaching a result similar to the one involving the NEJM, the court denied Pfizer’s motion to compel, holding that “any probative value would be outweighed by the burden imposed on the Journals in invading the sanctity” of the peer-review process, and that “it is not unreasonable to believe that compelling production of peer review documents would compromise the process.”[liii]
Thus, the case law disfavors subpoenas to scientific journals seeking discovery of documents concerning peer review of articles that are relied on by experts in product liability litigations.
- Exclusion of the Study and/or the Expert
Daubert charges trial courts with the gatekeeping responsibility of keeping unreliable scientific evidence out of the courtroom. If the crux of an expert’s opinion is based on a peer-reviewed study that is found to be unreliable, the expert’s opinion will be excluded.
In the Viagra case, in its initial decision, the court ruled that plaintiffs’ general causation evidence was admissible, finding it to be reliable principally because of a study published in a peer-reviewed journal. However, after discovery demonstrated that the study was critically flawed notwithstanding peer review, the court reconsidered its earlier decision:
In its previous Daubert ruling, the Court placed great weight on the fact that the McGwin Study had been peer reviewed and published by the Journal, and that the study had not been produced using post-litigation data. As noted above, however, numerous miscodings and errors have rendered the McGwin Study as published unreliable.[liv]
The court then considered whether there was a sufficient basis for the expert’s opinion absent that particular study and concluded there was not.[lv] In a companion opinion, the court granted summary judgment dismissing the cases holding that “[b]ecause Plaintiffs have failed to produce admissible expert testimony that Viagra caused their NAION, Pfizer’s motion for summary judgment must be granted.”[lvi]
On the other hand, if there is a reliable basis for the expert’s opinion that is independent of the flawed study, then the opinion may be admissible even without the study. Thus, in the Accutane case, the appellate court affirmed the lower court decision excluding the expert’s study, finding it to be “not soundly and reliably generated,” but nevertheless remanded the matter to the trial court “to consider whether [the expert] should be permitted to testify as an expert on general causation without reference to the PET study.”[lvii]
- Correction or Retraction of the Study
Many scientific journals are members of the Committee on Publication Ethics, an organization “concerned with integrity of peer-reviewed publications in science, particularly biomedicine.”[lviii] COPE has promulgated a Code of Conduct that provides for correction or retraction of flawed studies:
Whenever it is recognised that a significant inaccuracy, misleading statement or distorted report has been published, it must be corrected promptly and with due prominence.
If, after an appropriate investigation, an item proves to be fraudulent, it should be retracted. The retraction should be clearly identifiable to readers and indexing systems.[lix]
Consistent with the COPE guidelines, Lancet published a retraction of the MMR vaccine autism article, and Lancet Oncology published a correction of the smokeless tobacco article to disclose the author’s work as a litigation expert. Similarly, in the Viagra NAION litigation, the medical journal retracted the article, and in the Zoloft birth defects litigation the medical journal published a correction.
Thus, if discovery reveals significant flaws in a published study, the journal may be willing to correct or retract the publication.
- Scientific Integrity Investigations
The Office of Research Integrity (ORI), part of the Department of Health and Human Services, promotes integrity in biomedical and behavioral research supported by the U.S. Public Health Service by defining research misconduct and overseeing institutional investigations of misconduct. Prior to June 2005, ORI regulations specified that
Misconduct or Misconduct in Science means fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting, or reporting research. It does not include honest error or honest differences in interpretations or judgments of data.[lx]
This provision was controversial because the phrase “or other practices that seriously deviate from those that are commonly accepted within the scientific community” is vague and because it could be construed to encompass non-intentional misconduct.[lxi] The regulation was amended effective June 2005. ORI’s current definition of “research misconduct” is limited to three specific acts (fabrication, falsification and plagiarism), each of which requires intent as a necessary element of the offense:
Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.
- Fabrication is making up data or results and recording or reporting them.
- Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.
- Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.
- Research misconduct does not include honest error or differences of opinion.[lxii]
Besides federal regulations, individual institutions may have their own policies regarding scientific integrity and research misconduct that govern researchers affiliated with the institution.[lxiii] Thus, depending upon the degree of culpability, the source of funding, and the institutional affiliation of the researcher, a scientific integrity investigation may be initiated if there is serious misconduct in a study. Federal regulations and institutional policies have established procedures for a party to file a scientific misconduct complaint.
- Private Civil Action Against Investigator
If an investigator publishes a study that fraudulently or negligently links a product to a harmful effect, does the manufacturer have a civil remedy against the investigator? For instance, could the manufacturer of the MMR vaccine state a claim against Dr. Wakefield, who was found culpable of misconduct and whose study Lancet retracted?
There are no reported cases on point. In CSX Transportation v. Gilkison, the plaintiff railroad company, which had been sued by thousands of plaintiffs alleging asbestos injuries, sued a radiologist and a plaintiffs’ law firm, alleging that they had orchestrated a fraudulent mass x-ray screening process to manufacture cases.[lxiv] Although the district court initially dismissed most of the claims, the Fourth Circuit reversed, and the railroad ultimately prevailed at trial.
Clearly, bringing a lawsuit against a researcher should be reserved for extreme cases because it could be construed as an attempt to stifle academic freedom and scientific discussion. In 2007, a Congressional Committee released a staff report describing what it characterized as “intimidation” of an “independent scientist” who had discussed a possible increased cardiovascular risk of the diabetes drug Avandia at scientific meetings of the Endocrine Society and the American Diabetes Association.[lxv] The staff report asserted that the manufacturer of the medication “silenced [the physician’s] concerns about Avandia by complaining to his superiors and threatening a lawsuit.”[lxvi] The staff report also alleged an attempt to intimidate a researcher who had expressed concerns about Vioxx. In its conclusion, the staff report states “Corporate intimidation, the silencing of scientific dissent, and the suppression of scientific views threaten both the public well-being and the financial health of the federal government, which pays for health care.”[lxvii]
Peer-reviewed scientific evidence is central—often outcome-determinative—to product liability and mass tort litigation. Because peer review is far from perfect, such evidence can be and should be vigorously challenged. Scientific journals and courts have recognized that true peer review requires replication by independent scientists. Researchers should be willing to provide their underlying data, study protocols and statistical programming. If necessary, the case law indicates that a party can compel such disclosure, subject to a suitable protective order.