Only a small number of reanalyses of data from randomized clinical trials (RCTs) have been published, but among those that have, about one-third led to changes in findings that implied conclusions different from those of the original article regarding the types and number of patients who should be treated, according to a study in the September 10 issue of JAMA.1
Some authors have argued that data sharing and reanalysis of RCTs should become a widely adopted standard, that doing so could have major consequences for individual and public health, and that consumers (the public) should have access to complete information about drugs and devices. Arguments against access to raw data and reanalyses include potential risks to trial participants' confidentiality; inappropriate use of data sets, resulting in spurious findings; release of commercially sensitive information; and "rogue" reanalysis by nonexperts or by analysts with conflicts of interest, according to background information in the article.
Shanil Ebrahim, Ph.D., of Stanford University, Stanford, Calif., and colleagues conducted an electronic search of MEDLINE to identify all published studies (through March 9, 2014) that completed a reanalysis of individual patient data from previously published RCTs addressing the same hypothesis as the original RCT. The primary outcomes examined were changes in direction and magnitude of treatment effect, statistical significance, and interpretation about the types or numbers of patients who should be treated.
The researchers identified 37 reanalyses of patient-level data from previously published RCTs (reported in 36 articles). Most of the reanalyses were completed by authors involved in the original trial; five were performed by entirely independent authors. Reanalyses differed most commonly in statistical or analytical approaches (n = 18) and in definitions or measurements of the outcome of interest (n = 12). Four reanalyses changed the direction and two changed the magnitude of treatment effect; four led to changes in statistical significance of findings.
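To make the methodological point concrete, the following is a minimal, hypothetical sketch of how two of the most common sources of divergence noted above, a different statistical approach and a different outcome definition, can yield different test statistics (and potentially different significance calls) from the very same trial data. All numbers, cutoffs, and function names here are invented for illustration and are not drawn from the JAMA study or any real trial.

```python
# Hypothetical illustration: one simulated "trial," analyzed two ways.
# Invented data; not from the JAMA study or any real RCT.
import math
import random

random.seed(1)

# Simulated outcome scores for a control arm and a treatment arm.
control = [random.gauss(50, 10) for _ in range(100)]
treated = [random.gauss(53, 10) for _ in range(100)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

def responder_z(a, b, cutoff=60):
    """z statistic comparing proportions of 'responders' (score >= cutoff)."""
    pa = sum(x >= cutoff for x in a) / len(a)
    pb = sum(x >= cutoff for x in b) / len(b)
    p = (pa * len(a) + pb * len(b)) / (len(a) + len(b))
    se = math.sqrt(p * (1 - p) * (1 / len(a) + 1 / len(b)))
    return (pb - pa) / se

# Analysis 1: compare mean scores directly (continuous outcome).
t_continuous = welch_t(control, treated)

# Analysis 2: redefine the outcome as binary "response" and compare
# response proportions. Same data, different outcome definition.
z_dichotomized = responder_z(control, treated)

# The two statistics generally differ, so the p-values and the
# significance conclusions drawn from them can differ as well.
print(t_continuous, z_dichotomized)
```

The point of the sketch is only that discretionary analytic choices, here, analyzing a score on its continuous scale versus dichotomizing it at an arbitrary cutoff, operate on identical data yet need not agree, which is one mechanism by which a reanalysis can reach a different conclusion than the original report.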
Approximately one-third (13 of 37 [35 percent]) of the reanalyses led to interpretations different from that of the original article: 3 (8 percent) indicated that different patients should be treated, 1 (3 percent) that fewer patients should be treated, and 9 (24 percent) that more patients should be treated.
“It is difficult to assess whether these changes in trial conclusions led eventually to major changes in clinical practice and, if so, how large these changes were. Clinical practice choices depend only partly on trial evidence, and sometimes multiple additional trials exist that inform the same question. Nevertheless, when contradicting messages exist, it is unclear which of the 2 discrepant articles will have more influence: the original is usually published in more influential journals, but the subsequent reanalysis may be viewed as a more correct appraisal of the data,” the authors write.
Harlan M. Krumholz, M.D., S.M., of the Yale University School of Medicine, New Haven, Conn., and Eric D. Peterson, M.D., M.P.H., of the Duke University Medical Center, Durham, N.C., who is also an Associate Editor of JAMA, comment on this study in an accompanying editorial.2
“Replication is a vital part of the scientific method. Fields outside of medicine have already embraced sharing experimental data, as have the basic biological sciences within medicine. The culture of clinical research in medicine will need to evolve for open science to succeed. The recognition that one trial can potentially lead to different findings and conclusions depending on many discretionary decisions that are made about the data and reanalyses almost mandates that those choices are transparent and described in detail—and that others have the chance to replicate them. Rather than the rare exception, open science and replication should become the standard for all trials and especially those that have high potential to influence practice.”